Artificial Intelligence has become a topic of worldwide attention in recent years. Most discussions emphasize the applications of the technology and its implications for the economy and human society; fewer address the technical machinery behind it. Society mostly dwells on the bright side of the technology, and seldom do people talk about the possible criminal uses that exploit it. The dark side easily slips one's mind when one is immersed in the joy of the light. The goal of this paper is to reveal some of these possible dangers, present and future, to the reader.

I. What A.I. Is: A Brief History

First we define what we mean when referring to "Artificial Intelligence" in this paper. The term nowadays mainly refers to the "deep learning" algorithms developed by a group of computer scientists from the 1980s onward, among whom Geoffrey Everest Hinton is arguably the best-known contributor. Deep learning uses neural networks that loosely resemble the way the human brain processes and refines information through neurons and synapses. The term A.I., however, in its natural sense covers more than deep learning. Tracing back to the 1950s, when the computer was first introduced to the world, several kinds of neural networks already existed. These networks aimed to give machines the ability to classify and categorize data, that is, to make human-like inferences and predictions about the attributes of a data set. The perceptron, simple as it seems, was arguably the first spark of the neural network; its circuitry was hardly more elaborate than the wiring in a calculator. However, owing to its innate inability to solve problems such as the XOR problem, it soon lost its appeal to computer scientists.
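The XOR limitation mentioned above can be checked directly: no single-layer perceptron (one linear threshold unit) can classify all four XOR cases, because the two classes are not linearly separable. A minimal brute-force sketch in Python (illustrative, not from any cited paper):

```python
import itertools

# XOR truth table: inputs and targets
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]

def accuracy(w1, w2, b):
    """Fraction of XOR rows a single linear threshold unit gets right."""
    correct = 0
    for (x1, x2), t in zip(X, y):
        pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        correct += (pred == t)
    return correct / len(X)

# Exhaustively search a grid of weights: no single-layer perceptron
# exceeds 3/4 accuracy on XOR, because the classes are not linearly
# separable (a line can isolate at most three of the four points).
grid = [i / 2 for i in range(-8, 9)]  # weights from -4.0 to 4.0
best = max(accuracy(w1, w2, b)
           for w1, w2, b in itertools.product(grid, repeat=3))
print(best)  # 0.75 at best
```

A multi-layer network with a hidden layer solves XOR easily, which is exactly the step the field took later.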
Scientists then turned their attention to more mathematical approaches such as machine learning and statistics. It was not until the 1980s through the 2000s that the invention of deep learning and advances in computing speed drew the attention of data scientists back to neural networks. Even today, however, machine learning still holds a very large share of the field of artificial intelligence. In this sense, A.I. is ultimately an intangible program or algorithm that resides in some kind of physical hardware, such as a computer. It comprises deep learning, neural networks, and machine learning, as well as other kinds of intelligent systems. In short, A.I. is software that has no physical form unless it is embedded in hardware. It is like the human brain: when the brain is damaged, we cannot make sound judgments; worse, we may make harmful judgments that jeopardize society. Imagine a seventy-year-old driver who mistakes the accelerator for the brake and runs into a crowd. And like the human brain, a child taught to misbehave may, once grown, repeat what he was taught in childhood. So it is with A.I. As a machine, it can become a tool that facilitates our daily work, a weapon that defends our land, or an instrument molded for criminal activity.

II. Types of Criminal Activity Involving Possible Artificial Intelligence Usage

1. Smart Viruses

Probably the first thing that comes to mind is a smart virus that mutates its own binary code so as to slip past detection by present antivirus software, guided by its past failures. Such a virus could record every combination of "failure or success of intrusion" and "the sequence of its own code" and work out how to mutate that code. Every time it fails to attack a system, it may come back smarter.
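The learn-from-failure mechanism described here is, at its core, mutation guided by a success/failure signal. A harmless toy sketch (a generic hill climber; the hidden target bit string is an invented stand-in for "passing detection", and nothing here is virus-specific):

```python
import random

random.seed(0)

# The mutating agent never sees the target directly; it only learns
# a scalar feedback signal (how close its last attempt came), which
# is the generic "learn from each failed attempt" loop the text describes.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]  # hypothetical detector state

def score(candidate):
    """Feedback signal: how many positions matched on the last attempt."""
    return sum(c == t for c, t in zip(candidate, TARGET))

candidate = [0] * len(TARGET)
for step in range(2000):
    mutant = candidate[:]
    i = random.randrange(len(mutant))
    mutant[i] ^= 1                    # flip one random bit
    if score(mutant) >= score(candidate):
        candidate = mutant            # keep mutations that do no worse
    if candidate == TARGET:
        break

print(candidate == TARGET)  # True: feedback alone suffices to adapt
```

The point is structural: a blind feedback channel plus cheap mutation is enough to converge, which is why every failed intrusion attempt can make the next one more likely to succeed.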
Fed by the massive data gathered across the worldwide Internet, it might have the potential to grow into an uncontrollable smart virus. According to a report in Harvard Business Review [1], such a smart virus could behave like an autonomous life form with the potential to cause worldwide catastrophe, and it should not be overlooked. Ironically, however, it seems that the only way to defend our systems against this kind of smart virus is to deploy a smart detector built on the same algorithms as the virus itself. Once a security system is breached, every kind of personal information becomes obtainable; the devastating outcome is a self-reinforcing chain reaction.

2. Face Cheating

Another possible kind of A.I.-enabled criminal activity is face cheating. Face locks are widely used nowadays, from smartphones to personal computers, and their use keeps growing thanks to their convenience and presumably hard-to-cheat technology. The most widely used neural network for this task is the convolutional neural network, which loosely mimics the human visual system through convolution and pooling operations, though other architectures, such as Hinton's capsule networks, are capable of the same job. According to a paper from Google Brain [2], machine learning models are vulnerable to adversarial examples: small changes to an image can cause a computer vision model to make mistakes such as identifying a school bus as an ostrich, and adversarial examples based on perceptible but class-preserving perturbations that fool multiple machine learning models can also fool time-limited (though not time-unlimited) human observers. Because a face recognition system discriminates between faces that differ only slightly, it might seem hard to cheat it with another similar yet different face.
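The adversarial-example effect quoted above can be sketched on a toy model. This is an assumed illustration: logistic regression on synthetic data stands in for a real face-recognition CNN, attacked with the standard fast-gradient-sign method (FGSM); on real high-dimensional images the required perturbation is far smaller relative to the signal than here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic classes (toy stand-ins for "authorized" / "other" faces).
d = 20
X = np.vstack([rng.normal(-1, 1, (200, d)), rng.normal(1, 1, (200, d))])
y = np.array([0] * 200 + [1] * 200)

# Train plain logistic regression by gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    g = (p - y) / len(y)
    w -= 0.5 * (X.T @ g)
    b -= 0.5 * g.sum()

def acc(Xs):
    """Classification accuracy of the trained model on inputs Xs."""
    return np.mean(((Xs @ w + b) > 0).astype(int) == y)

# FGSM: nudge every input in the direction that increases the loss.
# For logistic regression the loss gradient w.r.t. x is (p - y) * w.
p = 1 / (1 + np.exp(-(X @ w + b)))
X_adv = X + 1.5 * np.sign((p - y)[:, None] * w[None, :])

print(round(acc(X), 2), round(acc(X_adv), 2))  # adversarial accuracy collapses
```

The same gradient-following idea, scaled up, is what lets an attacker morph one face toward whatever a face-recognition model accepts.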
However, just as with the smart virus, what makes artificial intelligence so formidable is not its ability to achieve high precision on the first try but its ability to learn, refine, progress, and evolve through the numerous failures it tastes. Every failure only makes it smarter. Like a smart virus, a cheater neural network might record the combination of "failure or success of intrusion" and "the configuration of its own synaptic weights," adjusting those weights until it transforms a false face into one the detector accepts as authentic, potentially making the targeted personal account available to anyone through face perturbation and transformation. In other words, a cheater network tunes its neurons to fit the target face and thereby cheats the face detection system.

3. Voice Cheating

Another possible kind of A.I.-enabled criminal activity is voice cheating. Just as with face cheating, when a system is designed to be unlocked by the authentic voice of its user, the same system can be fooled by a similar voice generated with artificial intelligence.

4. Patrol Prediction

There has been a surge of work on crime prediction using artificial intelligence. According to a paper in the European Police Science and Research Bulletin [3], "Spatial and temporal methods appear as a very good opportunity to model criminal acts. Common sense reasoning about time and space is fundamental to understand crime activities and to predict some new occurrences. The principle is to take advantage of the past acknowledgment to understand the present and explore the future." In this sense, the police are able to anticipate possible criminal activities by predicting their likely location, time, and methods, lengthening the lead time for pre-emptive action and saving the cost of unnecessary human labor.
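The spatio-temporal principle quoted above can be sketched very simply: bucket past incidents by place and time band, and forecast that the busiest bucket is the likeliest site of the next one. The incident data below are invented for illustration; real predictive-policing systems use far richer models, but this is the core.

```python
from collections import Counter

# Hypothetical past incidents as (x_cell, y_cell, hour_of_day).
incidents = [
    (2, 3, 23), (2, 3, 22), (2, 3, 23), (5, 1, 14),
    (2, 3, 21), (5, 1, 15), (0, 0, 3), (2, 3, 22),
]

def night(hour):
    """Coarse time band: True for roughly 21:00-05:00."""
    return hour >= 21 or hour < 5

# Count incidents per (grid cell, time band) bucket.
counts = Counter(((x, y), night(h)) for x, y, h in incidents)

# Forecast: the bucket with the most past incidents is where a patrol
# (or, as the text warns, a criminal planner) would focus next.
hotspot, n = counts.most_common(1)[0]
print(hotspot, n)  # ((2, 3), True) 5
```

Note the symmetry the text goes on to describe: exactly the same aggregation, run over observed patrol positions instead of incidents, predicts where the police will be.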
Yet the same goes for criminal activity. Criminals are equally able to track the timing, location, and length of every patrol the police make, and might avoid certain routes in order to carry out illegal deals or other crimes. Since few criminals use A.I. as a counter-weapon against the police, police detection systems will not easily spot these outliers, making such criminal activities even more likely to succeed. If this dark technology is combined with other modern technologies such as drone navigation or drone delivery, perpetrators might be able to chart a safe route for completing drug deals.

III. A.I. Cyber Crimes and Criminal Law: Who Should Be Responsible?

What comes out from the law goes back to the law. With these possible threats looming today or in the future, new kinds of intelligent criminal activity are foreseeable in the near future. How can the law react to these potential threats? Is the present law able to tackle these new problems with existing legal analysis? The question requires some research. Since the European Renaissance, it has been almost settled that a citizen has a will of his own and should be held liable for what he does. The goal of the law is to make sure this happens, since a citizen has a mind of his own. Through punishment, the law is presumed to guarantee that an outlier can be corrected by enforcement, which is, interestingly, exactly the way a human engineer trains an artificial intelligence system. With the arrival of the twenty-first century, however, a new question appears: can artificial intelligence be legally classified as a subject that satisfies the mental element required by criminal law, rather than a mere object or tool manipulated by perpetrators?
This question is philosophical and can be traced back to the 1950s, when the Turing Test was proposed by the famous English computer scientist Alan Turing. Some scholars have proposed that three kinds of liability could coexist: solely human liability, joint human and A.I.-entity liability, and solely A.I.-entity liability ([4], p. 95). The main criterion separating these classes is whether a human engineer or practitioner is able to foresee the damaging outcome. When damage attributable to an A.I. system cannot be foreseen by a human engineer, liability might rest solely with the A.I. entity. On this view, the present criminal system is largely sufficient to deal with A.I.-related crimes, for all we need to do is treat an A.I. system like a car or any other machine; the law, as a system designed to retrain humans in order to stabilize society, need only focus its attention on human acts. Yet when a superintelligent A.I. entity is developed whose behavior is neither controllable nor foreseeable by its creators, should it be classified as an entity in criminal law? Even if the answer is yes, it seems rather meaningless to punish a machine. All we can do in such circumstances is retrain, retune, and redesign the intelligent system. For the machine, retraining is itself a kind of punishment, since it is forced to receive negative information and change its internal synapses or algorithm. It is arguable whether training really amounts to punishment, since a machine feels no pain; yet what pain really is, philosophically, is arguable too.

IV. Conclusion

Across human history, it seems almost destined that whenever a new technology is introduced to solve an old problem, a new one is created by the same technology. It is like a curse we can never escape, only face.
This paper finds that people seldom talk about the dark side of this new technology, yet the potential hazards it can bring should not be overlooked. Ironically, those hazards seem solvable only by the same technology itself. There may be an endless competition between the dark side and the bright side of A.I., pushing the technology to a level that surpasses our present imagination. However, the fault never lies with the technology but with the humans who misuse it. What the law can do to crack down on these possible dangers will be a major discussion in the legal field in the near future. This paper introduces some of the topics and hopes to draw more attention to this area.

References:

[1] Roman V. Yampolskiy, "AI Is the Future of Cybersecurity, for Better and for Worse," Harvard Business Review, 2017, available at: https://hbr.org/2017/05/ai-is-the-future-of-cybersecurity-for-better-and-for-worse.
[2] Gamaleldin F. Elsayed, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alex Kurakin, Ian Goodfellow, and Jascha Sohl-Dickstein, "Adversarial Examples that Fool both Computer Vision and Time-Limited Humans," arXiv:1802.08195v3 [cs.LG], 2018.
[3] Patrick Perrot, "What about AI in criminal intelligence? From predictive policing to AI perspectives," European Police Science and Research Bulletin, No. 16, 2017.
[4] Gabriel Hallevy, When Robots Kill: Artificial Intelligence under Criminal Law, Northeastern University Press, Boston, 2013.
[5] Gabriel Hallevy, Liability for Crimes Involving Artificial Intelligence Systems, Springer International Publishing, 2015.
Legal Aspects and Liability Issues Concerning Autonomous Ships

Every sector of business and industry is transforming into a digital society, and the maritime sector is no exception. What is new is that remotely controlled ships and fully autonomous ships are becoming a reality. Remotely controlled and autonomous ships promise to be tools for reaching safety, effectiveness, and economic goals. However, because they aim to replace the human element in the maritime industry, their deployment raises new legal issues and liability considerations. This study aims to highlight some critical legal issues of autonomous ships for the reader, without attempting to solve them or give definitive answers.

I. The Approach of the International Maritime Organization

To address the issues raised by the deployment of autonomous ships, the International Maritime Organization's Maritime Safety Committee (MSC) has taken first steps. At its 100th session (MSC 100), the committee approved a process of assessing IMO instruments to see how they may apply to ships with varying degrees of autonomy. For each instrument related to maritime safety and security, and for each degree of autonomy, provisions will be identified which: apply to MASS (Maritime Autonomous Surface Ships) and prevent MASS operations; or apply to MASS, do not prevent MASS operations, and require no action; or apply to MASS and do not prevent MASS operations but may need to be amended or clarified, and/or may contain gaps; or have no application to MASS operations. The degrees of autonomy identified for the purpose of the scoping exercise are as follows. Degree one, a ship with automated processes and decision support: seafarers are on board to operate and control shipboard systems and functions; some operations may be automated and at times unsupervised, but seafarers on board are ready to take control. Degree two, a remotely controlled ship with seafarers on board: the ship is controlled and operated from another location.
Seafarers are available on board to take control and to operate the shipboard systems and functions. Degree three, a remotely controlled ship without seafarers on board: the ship is controlled and operated from another location, and there are no seafarers on board. Degree four, a fully autonomous ship: the operating system of the ship is able to make decisions and determine actions by itself. The initial review of instruments under the purview of the Maritime Safety Committee will be conducted during the first half of 2019 by a number of volunteering Member States, with the support of interested international organizations. An MSC working group is expected to meet in September 2019 to move the process forward, with the aim of completing the regulatory scoping exercise in 2020. The instruments to be covered in the MSC's scoping exercise for MASS include those covering safety (International Convention for the Safety of Life at Sea, SOLAS); collision regulations (International Regulations for Preventing Collisions at Sea, COLREG); loading and stability (International Convention on Load Lines); training of seafarers and fishers (International Convention on Standards of Training, Certification and Watchkeeping for Seafarers, STCW); search and rescue (International Convention on Maritime Search and Rescue, SAR); tonnage measurement (International Convention on Tonnage Measurement of Ships); safe containers (International Convention for Safe Containers, CSC); and special trade passenger ship instruments (Special Trade Passenger Ships Agreement, STP). IMO will also develop guidelines on MASS trials. These guidelines should be generic and goal-based, taking a precautionary approach to ensuring the safe, secure, and environmentally sound operation of MASS. Interested parties were invited to submit proposals for the future development of the principles to the next session of the committee.

II. Other Legal Issues Concerning Autonomous Ships

In March 2017, the Comité Maritime International (CMI) Working Group on Unmanned Ships circulated a questionnaire aimed at identifying the nature and extent of potential obstacles in the current international legal framework to the introduction of (wholly or partly) unmanned ships. The replies can be summarized into the following legal issues.

Legal definition and registration of remotely controlled and autonomous ships. The definition of a remotely controlled or autonomous ship depends on the purpose of each individual convention; current international conventions regulating ships generally contain no recognized definition of "ship" or "vessel." Moreover, owing to their geographical features, countries tend to impose different safety requirements on ships, so even a definition of remotely controlled or autonomous ships given by international regulations may not be accepted by a national register of ships. For example, according to the reply to the questionnaire from the Argentine maritime law association, the Argentine Navigation Act prescribes that, to register a ship in the Argentine Register, regulatory requirements regarding construction and seaworthiness must be fulfilled. There are, however, no rules regarding the registration of remotely controlled or autonomous ships, as the current act assumes the existence of a crew on board; unmanned ships would thus not be registered by the Argentine registry of ships. At present, this fragmentation of definition and registration can hinder the deployment of remotely controlled or autonomous ships. Given that shipping forms a global transportation network, the definition and registration issues had better be solved at the international level by the International Maritime Organization (IMO).
Legal issues concerning seafarers. The International Convention on Standards of Training, Certification and Watchkeeping for Seafarers (STCW) 1978 sets minimum qualification standards for masters, officers, and watch personnel on seagoing merchant ships and large yachts. If the human operator on board is replaced by a machine, will the convention find no application to remotely controlled or autonomous unmanned ships? The CMI research points out that the maritime law associations of Finland, Panama, and the United States assume that, absent new specific legislation, the STCW convention would likely apply to shore-based personnel as well. The British maritime law association states that regardless of whether STCW would apply to unmanned operation, certain provisions on training and competence clearly would not apply to shore-based controllers and other personnel. The Japanese maritime law association likewise states that although the convention does not apply directly to a remotely controlled unmanned ship, certain rules requiring watchkeeping officers to be present may nevertheless arguably be interpreted to render an unmanned ship in breach of STCW, and to that extent be applicable to unmanned ships. An amendment of the convention therefore seems inevitable. Standing on the other side, the Institute of Marine Engineering, Science & Technology recommends that pairing humans with machines to enhance human intelligence and performance, rather than totally replacing humans, is an approach that should not be overlooked. Even if unmanned ships become a reality, seafarer skills will remain an essential component of the shipping sector in the long term, and the minimum qualifications of masters, officers, and watch personnel may not need to change. "Human error" has been used to create a blame culture toward the workforce at sea, yet such error also results from poor implementation of, introduction of, and preparation for new technology.
Many studies show that seafarers are worried about the impact of autonomous ships. If the development of autonomous ships means replacing every human element on ships, people who work in the marine sector will not easily accept these novel technologies, and that will not lead to a safer future for the maritime industry.

Safety requirements for remotely controlled and autonomous ships. Rules 8(a) and 5 of the International Regulations for Preventing Collisions at Sea, 1972 (COLREGs) require ships to be operated in accordance with the duties of "good seamanship" and "proper look-out." These rules presuppose operation by humans, which leads to the following two questions. (1) Would the operation of an unmanned ship be contrary to the duty of good seamanship? The duty of good seamanship emphasizes the importance of human experience and judgment in the operation of a vessel, and the adaptability of the responses good seamanship provides. Whether an autonomous ship could reach this level of adaptive judgment depends on the sophistication of its autonomous system. According to the CMI research, the maritime law associations of countries including Argentina, Britain, Canada, China, Germany, Japan, and Panama emphasize the requirement that an autonomous ship must be at least as safe as a ship operated by a qualified crew. (2) Would the proper look-out required by Rule 5 be satisfied by cameras and aural sensing equipment? COLREG Rule 5 has two vital elements. First, the crew on the bridge should pay attention to everything, not just looking ahead out of the bridge windows but looking all around the vessel, using all senses and all available equipment. Second, all of this information must be used continuously to assess the situation the vessel is in and the risk of collision.
In this context, if the sensors and transmission equipment are sufficient to enable an appraisal of the information received in a manner similar to what would be available if the controller were on board, then Rule 5 should be considered satisfied. Whether a fully autonomous ship could comply with Rule 5 is less clear and depends on the sophistication of its autonomous system: if the technology cannot at present provide spatial awareness and appreciation of the vessel's position equivalent to that of humans on board, then Rule 5 would not be considered fulfilled.

Liability. Liability is an important issue frequently raised in connection with autonomous ships. According to the MUNIN study of 2015, liability issues for autonomous ships might arise in the following situations. (1) Deviation. Suppose a ship was navigating autonomously and a deviation of the system caused collision damage: how might liability be apportioned between the shipowner and the manufacturer? According to the CMI research, ten maritime law associations (Britain, Canada, China, Croatia, the Netherlands, France, Germany, Italy, Spain, and Malta) stated that under their domestic law the third party may have a claim against the manufacturer. The third party may sue in tort if negligence on the part of the manufacturer can be proved and shown to have caused the damage. In the European Union, third parties may also claim under Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products. (2) Limitation of liability. Article 1 of the 1976 convention on limitation of liability for owners of ships provides that shipowners may limit their liability for claims arising from any single incident, with the size of the limitation based on the tonnage of the ship. Within the convention, the term "shipowner" includes the ship's owner, charterer, manager, or operator.
International conventions dealing with limitation of liability are phrased in neutral terms with regard to the presence of a master or crew; circumstances in which a ship has no person on board therefore do not appear to undermine the operation of those conventions. (3) Bills of lading. A bill of lading is a written document signed on behalf of the owner of the ship in which goods are embarked, in which the shipowner acknowledges receipt of the goods and undertakes to deliver them at the end of the voyage. Typically, the shipper signs the bill of lading along with the owner of the cargo at the point where the shipper takes carriage of the cargo in question, and the bill is then signed by the cargo's recipient once it has reached its destination. In other words, the document accompanies the cargo throughout, is signed by the owner, shipper, and recipient, and generally describes the nature and quantity of the goods being shipped. A question then arises: in the absence of a master or any crew on board the ship, how will the bill of lading be signed by the ship's master?

III. Conclusion

The shipping industry is a rich, highly complex, and diverse industry, with a history of both triumph and tragedy in its adoption of technology. In light of the potential of remote and autonomous ships, and for the sake of assuring safe and efficient operation, it is better to understand their impact on the industry in advance. The taxonomy of automation between human and machine is vast and complex, especially in the legal sector. Before such systems can reach full autonomy and undertake independent operation, our law should be ready.

IV. References

[1] Comité Maritime International, Maritime Law for Unmanned Ships, 2017, available at https://comitemaritime.org/work/unmanned-ships/ (last visited Dec. 25, 2018)
[2] MUNIN, D9.3: Quantitative Assessment, Oct. 10, 2015, available at http://www.unmanned-ship.org/munin/news-information/downloads-information-material/munin-papers/ (last visited Dec. 25, 2018)
[3] Maritime Digitalisation & Communication, MSC 100 set to review MASS regulations, Oct. 23, 2018, available at https://www.marinemec.com/news/view,msc-100-set-to-review-mass-regulations_55609.htm (last visited Dec. 25, 2018)
[4] IMarEST, Autonomous Shipping: Putting the human back in the headline, Apr. 2018, available at https://www.imarest.org/policy-news/institute-news/item/4446-imarest-releases-report-on-the-human-impact-of-autonomous-ships (last visited Dec. 25, 2018)
[5] Danish Maritime Authority, Analysis of regulatory barriers to the use of autonomous ships (Final Report), Dec. 2017, available at https://www.dma.dk/Documents/Publikationer/Analysis%20of%20Regulatory%20Barriers%20to%20the%20Use%20of%20Autonomous%20Ships.pdf (last visited Dec. 25, 2018)
1. Introduction

Cyber insurance is an effective tool for transferring cyber and IT security risk and minimizing potential financial losses; after Sony's personal-information security breach, for example, Sony made a cyber insurance claim to mitigate its losses. In Taiwan, demand in the cyber insurance market has been driven by the Personal Information Protection Act (PIPA), passed in April 2010 and implemented in October 2012. Under PIPA, a non-government agency, including natural persons, juridical persons, or groups, is liable for damage caused by its illegal collection, processing, or use of personal information, or by other infringements of the rights of the individuals whose personal information was collected, processed, or used. Where there is no evidence of the actual amount of damage, the non-government agency may have to pay each individual NT$500 to NT$20,000, and the total compensation in each case may reach NT$200 million. The cyber insurance market has nevertheless not prospered as expected, on the one hand because insurance companies lack incentives to develop and promote cyber-insurance products, and on the other because the price deters many companies from buying the insurance. Some countries have tried to identify the incentives and barriers in the cyber insurance market and have taken measures to kick-start its development. This paper addresses the barriers facing the cyber insurance market, describes how the American government has promoted it, and finally proposes suggestions on how to stimulate the market's growth.

2. What Is Cyber Insurance?

Insurance means that the parties concerned agree that one party pays a premium to the other, and the other party is liable for pecuniary indemnification of damage caused by unforeseeable events or force majeure.[1]
By extension, cyber insurance means that one party pays a premium and the other party is liable for pecuniary indemnification of damage caused by a cyber security breach. Cyber insurance usually covers both the insured's own losses (or costs) and its liabilities to third parties; for example, the insured may be liable for damage caused by the unlawful disclosure, resulting from the insured's negligence, of identifiable personal information belonging to a third party.[2] Typically, cyber insurance covers penalties or regulatory fines for data breaches, litigation costs and compensation arising from civil suits filed by those whose rights were infringed, direct costs of notifying those whose personal data were illegally collected, processed, or used, and so on.[3]

3. What Are the Barriers to the Cyber Insurance Market?

According to a 2012 report by the European Network and Information Security Agency, several issues significantly weaken insurers' incentives to design and provide cyber-insurance products: uncertainty about the extent of risk and a lack of robust actuarial data; uncertainty about what risk is being insured; the fast-paced nature of technology use; little visibility into what constitutes effective measures; the absence of an insurer of last resort to reinsure catastrophic risks; and the perception that existing insurance already covers cyber risks.[4] In Taiwan, insurance companies face the same issues when they try to develop and promote cyber-insurance products. What discourages insurance and reinsurance companies most, however, is the lack of accurate information with which to figure out the costs associated with different information security risks and thus to price a cyber insurance contract precisely. Several cases involving personal data breaches did occur after Taiwan's PIPA took effect on October 1, 2012, but few verdicts have been handed down.
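The statutory exposure itself is easy to bound. Using the PIPA figures quoted in the introduction (NT$500 to NT$20,000 per individual absent proof of actual damage, capped at NT$200 million per case), a sketch of the first, crude input an insurer might compute when actuarial data are missing:

```python
# Statutory figures from Taiwan's PIPA as quoted in the introduction.
PER_PERSON_MIN = 500          # NT$, statutory floor per individual
PER_PERSON_MAX = 20_000       # NT$, statutory ceiling per individual
CASE_CAP = 200_000_000        # NT$, total compensation cap per case

def pipa_exposure(affected_individuals):
    """Return (low, high) bounds on statutory damages for one breach."""
    low = min(affected_individuals * PER_PERSON_MIN, CASE_CAP)
    high = min(affected_individuals * PER_PERSON_MAX, CASE_CAP)
    return low, high

# A hypothetical breach touching 50,000 customers:
print(pipa_exposure(50_000))  # (25000000, 200000000): the cap binds at the top
```

The width of that band, before indirect costs are even considered, is precisely the pricing problem the text describes.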
The direct costs or losses resulting from violations of PIPA, including penalties or fines from the regulator, compensation to the parties of civil suits claiming that their personal data were unlawfully collected, processed, or used, litigation costs, and so on, are not easy to master. Indirect costs or losses, such as media costs, the costs of regaining reputation or consumer trust, and the costs of deploying proper technical measures to prevent the breach from recurring, are even more difficult to calculate. It is therefore hard to identify the costs of information security risk and thus to calculate precisely the premium the insured should pay. The rapid development of technology also hampers insurers' ability to master the types of information security risk to be insured and their costs. Alongside the convenience and efficiency of applying new technologies in the working environment, security issues arise too; for example, the loss or theft of mobile or portable devices may result in data breaches. In 2012, an unencrypted laptop containing personal information and other sensitive information on NASA employees was stolen from a locked vehicle, putting thousands of NASA workers and contractors at risk.[5] Per a report by a NASA inspector, similar data breaches had resulted from the loss or theft of 48 NASA laptops and mobile computing devices between April 2009 and April 2011.[6] There is no single formula that can guarantee 100% security, but some international organizations have promulgated best practices for information security management, such as the ISO 2700x standards.[7] In Taiwan, the Bureau of Standards, Metrology and Inspection (BSMI), which belongs to the Ministry of Economic Affairs, has likewise consulted the ISO standards and announced Chinese National Standards on information security.
For example, BSMI consulted ISO 27001, "Information technology – Security techniques – Information security management systems – Requirements", and then promulgated CNS 27001. Theoretically, if a company that tries to buy a cyber insurance policy covering data breaches and damages to customers' data privacy can show that it has adopted and does implement the suite of security management standards well, the premium could properly be reduced, because such a company faces less security risk. 8 However, it is still not easy to price a cyber insurance contract correctly, because there is not enough data or evidence to prove what constitutes effective information security measures, and there is no impartial, uncontroversial, or standard formula to value intangible assets like personal or sensitive information. 9 Finally, the availability of re-insurance programs plays an important role in the cyber insurance market because insurers would appeal to such programs as a risk management strategy. The lack of solid and actual data, as mentioned above, would discourage re-insurers from providing insurance policies that cover the insured's losses and liabilities. Therefore, insurers may not be keen to develop and offer cyber insurance products.

4. The USA experience in developing the cyber insurance market

4.1 Current market status

Due to the increase in the number of data breaches, cyber attacks, and civil suits filed by those whose data were illegally disclosed to third parties, more and more enterprises recognize the importance of cyber and privacy risks and are turning to cyber insurance to minimize potential financial losses. 10 Moreover, the increased government focus on cyber security also contributed to the rapid growth of the cyber insurance market.
11 For example, the US Department of Homeland Security has been aware of the benefits of cyber insurance, including encouraging better information security management, reducing the financial losses that a company has to face due to a data breach, and so on. 12 Compared to other lines of insurance, the cyber insurance market in the USA is not yet mature and remains small. For example, the gross premiums for medical malpractice insurance are more than ten times those of the cyber insurance market. However, the cyber insurance market certainly appears to be growing rapidly. Per the survey made by Corporate Board Member & FTI Consulting, 48% of corporate directors and 55% of general counsel regard the issue of data security as a high priority. 13 And, per the report made by Marsh, more and more companies are buying cyber insurance to cover financial losses due to data breaches or cyber attacks, and the number of Marsh's US clients purchasing cyber insurance increased 33% in 2012 over 2011. 14

4.2 What contributed to the growth of the cyber insurance market in the USA?

Some measures taken by the government, or regulatory interventions, had an impact on companies' incentives to carry cyber insurance. The CF Disclosure Guidance published by the U.S. Securities and Exchange Commission in October 2011 mentioned that, in addition to operational and financial risks, public companies shall disclose cyber security risks and cyber incidents, for such risks and incidents may result in severe financial losses and thus have a broad impact on their financial statements. 15 And, according to the guidance, appropriate disclosures may include risk factors and their potential costs and consequences, cyber incidents experienced or expected and their costs and consequences, undetected risks related to cyber incidents, and the relevant insurance coverage.
16 Such disclosure requirements triggered demand for cyber insurance products, because cyber insurance, as an effective tool to transfer financial losses or damages, could serve as evidence that a firm is managing its cyber security risks well and properly. 17 Demand for cyber-insurance products may also be created by the government by requiring government contractors and subcontractors to purchase cyber insurance under the Federal Acquisition Regulations (FAR), which mention that contractors are required by law and by the FAR to provide insurance for certain types of perils 18. Also, in order to sustain the covered critical infrastructure (CCI) designation, the owners of such infrastructure may need to carry cyber insurance, too. 19 On the other hand, referring to the Support Anti-Terrorism by Fostering Effective Technologies Act of 2002, which requires that those who provide Federal and non-Federal Government customers with qualified/certified anti-terrorism technologies shall obtain liability insurance of such types, but in an amount that is reasonable and does not distort the sales price of such technologies 20, the federal government tried to draft and enact legislation that provides limitations on cyber security liability 21. If this works, it could raise insurers' incentives, because the amounts of potential financial losses that may be transferred to insurers would become predictable. Besides, referring to the Terrorism Risk Insurance Act of 2002, which established the terrorism insurance program to compensate insurers who suffered insured losses due to terrorist attacks 22, the federal government may increase the supply of cyber insurance products by providing compensation to insurers who suffer insured losses due to cyber security breaches or cyber attacks. 23 In addition, some experts and stakeholders did suggest that the federal government implement reinsurance programs to develop cyber insurance programs.
24 Finally, to solve the problem of information asymmetry, the government tried to develop legislation that could build a mechanism for information sharing among private entities. 25 Also, it was recommended that the federal government consider allowing insurance firms to jointly establish an information-sharing database so that insurers could develop better models to figure out cyber risks and price cyber insurance contracts accurately. 26

5. Suggestions and conclusion

Compared to the USA, where 30-40 insurers offer cyber-insurance products, suggesting that a more mature market exists 27, the cyber insurance market in Taiwan is still at the first stage of the product life cycle. Few insurers have introduced cyber-insurance products covering issues related to personal information breaches. Based on the US government's experience in developing the cyber insurance market, the following suggestions are made for reference. First, the government may consider requiring its contractors and subcontractors to carry cyber insurance. This could stimulate demand for cyber insurance products as well as make cyber insurance prevail in the private sector as an effective risk management tool. Second, the government may consider establishing a re-insurance program to offer compensation to insurers who suffer large insured losses and damages, or imposing limitations by law on the amount insured. However, it is undeniable that providing a re-insurance program is not feasible when the government's budget is not abundant. Finally, an information-sharing mechanism, covering information on cyber attacks and cyber risks, may be helpful in solving the problem of information asymmetry.

1. Insurance Act §1 (R.O.C., 2012).
2. European Network and Information Security Agency, Incentives and barriers of the cyber insurance market in Europe, June 2012, at 8, http://www.enisa.europa.eu/activities/Resilience-and-CIIP/national-cyber-security-strategies-ncsss/incentives-and-barriers-of-the-cyber-insurance-market-in-europe.
3. Ben Berkowitz, United States: insurance-cyber insurance, C.T.L.R. 2012, 18(7), N183.
4. Supra note 2, at 19-25.
5. Mathew J. Schwartz, Stolen NASA laptop had unencrypted employee data, InformationWeek, November 15, 2012 11:17 AM, http://www.informationweek.com/security/attacks/stolen-nasa-laptop-had-unencrypted-emplo/240142160; Ben Weitzenkorn, Stolen NASA laptop prompts new security rules, TechNewsDaily, November 15, 2012 11:35 AM, http://www.technewsdaily.com/15482-stolen-nasa-laptop.html.
6. Irene Klotz, Laptop with NASA workers' personal data is stolen, CAPE CANAVERAL, Nov 14, 2012 8:47pm, http://www.reuters.com/article/2012/11/15/us-space-nasa-security-idUSBRE8AE05F20121115.
7. The Government of the Hong Kong Special Administrative Region, An overview of information security standards, Feb 2008, at 2, http://www.infosec.gov.hk/english/technical/files/overview.pdf; Supra note 2, at 21.
8. Supra note 2, at 21-22.
9. Id.
10. Id.
11. Id.
12. U.S. Department of Homeland Security, Cyber security insurance workshop readout report, Nov 2012, at 1, http://www.dhs.gov/sites/default/files/publications/cybersecurity-insurance-read-out-report.pdf.
13. John E. Black Jr., Privacy liability and insurance developments in 2012, 16 No. 9 J. Internet L. 3, 12 (2013).
14. Marsh, Number of companies buying cyber insurance up by one-third in 2012, March 14, 2013, http://usa.marsh.com/NewsInsights/MarshPressReleases/ID/29878/Number-of-Companies-Buying-Cyber-Insurance-Up-by-One-Third-in-2012-Marsh.aspx.
15. U.S. Securities and Exchange Commission, CF Disclosure Guidance: Topic No. 2 Cybersecurity, October 13, 2011, http://www.sec.gov/divisions/corpfin/guidance/cfguidance-topic2.htm.
16. Id.
17. Supra note 2, at 6 (last visited Dec. 31, 2012).
18. Federal Acquisition Regulations §28.301.
19. E. Paul Kanefsky, Insuring against cyber risks: congress and president Obama weigh in, March 2012, http://www.edwardswildman.com/newsstand/detail.aspx?news=2812.
20. Support Anti-Terrorism by Fostering Effective Technologies Act of 2002 §864.
21. Supra note 19.
22. Terrorism Risk Insurance Act of 2002 §103.
23. Supra note 19.
24. Id.
25. Id.
26. Id.
27. Supra note 2.