To paraphrase Hardin (1968), in a finite world the per capita share of the Internet’s resources must steadily decrease. The inference to be made is that each person who uses the commons affects not only herself but everyone, i.e. society. In economics, this cause-and-effect principle is expressed in terms of “externalities”: parties not privy to the decisions or actions of others are nonetheless affected by those actions, whether as a benefit or a cost, that is, as positive or negative externalities (Anderson & Moore, 2006).
It is the negative externalities that concern us here; that malicious Internet traffic is a danger is not debated, though the extent of that danger is (Anderson & Fuloria, 2009). Likewise, though the growing cost of malicious activity on the Internet is not debated (Cashell, Jackson, Jickling, & Webel, 2004), an equally heated debate centers on the question of liability for that malicious traffic and its negative externalities.
The Internet is a complex organism that poses difficult questions, and it is no wonder that the search for answers has over time demanded a multidisciplinary approach. Anderson (2009) emphasized the origins of cybersecurity as purely technological and mathematical, and it has since been argued that cybersecurity is not only a technical but an economic issue (Mead, 2004). Hardin (1968) maintained that some problems have no technical solution. But if technology is not the answer to protecting the Internet and its users, then what is? Moore (2010), on the other hand, has argued that because economics is so central to cybersecurity, policy and legislation are the means to incentivize a solution. Even so, Rubens & Morse (2013, p. 183) caution that legislation addressing liability “has not always been well-received or fully understood.”
Anderson & Fuloria (2009) also pointed to an emerging consensus that security economics, rather than technology, better protects infrastructure, while Hardin (1968) argued instead for an extension of morality as the necessary solution. To take the analogy further, it is immoral, given the declining resources of the Internet, to use it as a “cesspool” (Hardin, 1968, p. 1245). Morality, Hardin argues, is “system-sensitive,” and we have developed administrative law to deal with ambiguities and specifics; but administrative law is itself prone to corruption and produces a government not of laws but of men, prompting Hardin to ask, “Who shall watch the watchers themselves?” (Hardin, 1968, pp. 1245-1246). The recent revelation of the lawlessness of the NSA seems to be a case in point (Witte, 2014).
Where Does Liability Lie?
We live in a world where Internet access is increasingly seen as a human right, and a 2011 United Nations report stated just that (LaRue, 2011), repeating on a global scale an assertion first made in tiny Estonia in 2000 (Woodard, 2003). It is easy to commit crimes on the Internet; most malware is undetected (Moore & Anderson, 2009) and most cyber-criminals also escape detection, let alone punishment (Brenner, 2010). Understandably, as Anderson & Fuloria (2009, p. 8) say, “security is hard.” Harder still is determining where responsibility for that security lies. Software developers knowingly release vulnerable software to the public and worry about fixing those vulnerabilities only after release. Of course, vulnerabilities can also arise from insecure networks, lax security policies, back doors, and other causes, and the argument has been made that liability exists for insecure networks as well as for insecure software (Mead, 2004). Who is liable for malicious traffic on the World Wide Web, or, in Hardin’s terms, for turning the Internet into a cesspool?
A simple answer is that security depends on many, and therefore many are liable for that security, which is, after all, only as strong as its weakest link. Whereas in an enterprise network the weakest link is likely to be an employee, in the overall scope of cybersecurity that weak link might as easily be an entity composed of many people, a dearth of sound cybersecurity policies and procedures, or an absence of, or struggle over, regulatory standards and laws (Anderson & Moore, 2006).
Nobody in the private sector seems eager to take responsibility for the Internet they all profit from as each blames the other, and Anderson & Fuloria (2009) rightly stress the relative powerlessness of end users, especially in the mobile phone and PC markets (industries have somewhat more clout). This is hardly a productive course when it is considered that large-scale failings of cybersecurity can shake a nation’s – or the world’s – economy, and an argument can be made for government intervention (Anderson & Fuloria, 2009).
While in the private sector responsibility seems to roll downhill, all the way to the end user or consumer, President Obama’s May 2009 decision to make the United States’ digital infrastructure a strategic national asset heightens the federal role (and that of local and state governments) in cybersecurity (White House, 2009). While promising not to interfere with the private sector’s response to cybersecurity threats in terms of standards, the president stressed closer cooperation between the public and private sectors. This address was, in effect, an acceptance of responsibility by government for the security of America’s information and communication networks.
All society bears the cost of infrastructure attacks, but only a few can impact the security of these systems, including the public and private sectors and the people who make the computers and the software that runs them, including the operating systems. A great deal of debate has centered on the assignment of blame, whether to internet service providers (ISPs), software developers (the people who design, write, and test the software), operating system (OS) developers (the people who develop the software required for applications to run), or end users (the people who actually use the software or operating systems in question). All of these can affect cybersecurity, but not equally. Moore & Anderson (2009) have made the point that everyone who connects an infected computer to the internet creates a negative externality, but at the same time, an end user is generally only able to use operating systems or software programs others have designed. She thus lacks the expertise to make changes to them (for good or ill) and so bears the external cost or benefit of decisions made by the industries that make her Internet activity possible.
The Lament of the End User
The end user, while complicit, is at the bottom of the externality food chain. Lichtman (2004) has pointed out that end users who inadvertently propagate malicious software are easy to track down and suggested that they could pay their fair share for the damage done, but these people are, more often than not, unwitting victims, not criminals, and as he himself admitted, they lack the requisite sophistication to be malicious users.
To better illustrate the lackadaisical approach of ISPs, consider the 2010 move by the Australian Internet Industry Association (IIA) to issue a voluntary code of conduct recognizing a shared responsibility for cybersecurity by ISPs and consumers (Industry code, n.d.). But such actions, being completely voluntary, push ISPs to punish the consumer without holding the ISP accountable or providing any incentive for ISPs to clean up their networks. Should consumers, then, be punished for the malicious activities of others simply because they are easier to catch? That is what the Australian IIA’s solution seems to suggest.
Lichtman (2004) suggested that the only practical reason not to hold these end users accountable is the issue of cost-effectiveness. On the other hand, he argued, ISPs are well placed to counter the quantity and effectiveness of attacks, and indirect liability would force them to act appropriately (Lichtman, 2004). The problem for the end user is proving that she is not to blame for her own problems – Lichtman’s lack of sophistication and Brenner’s “sloppy online behavior” (Lichtman, 2004; Brenner, 2010, p. 34). But how realistic is it to blame the end user, who is ultimately caught between sophisticated criminals and the software developers, ISPs, and operating system developers who know better but who, for a variety of reasons, do not care?
Summary
That there is injustice in the system cannot be denied, and we might ask if Hardin (1968, p. 1247) is right in his assertion that “Injustice is preferable to total ruin.” As long as the dangers continue to be debated, software developers, operating system developers, and ISPs will continue to shy away from talk of total ruin. They are making money, after all, and they have no incentive to make wholesale changes to how they do business. As has been shown here, those responsible for defending the Internet disclaim any responsibility for the failure of those defenses and spread the cost to society instead (Moore, 2010). We cannot expect them to voluntarily bear that burden: self-regulation is an oxymoron.
There is plenty of blame to go around, and it is clear that the Internet security situation as it stands now cannot be allowed to continue indefinitely. Passing the buck is not a substitute for actual solutions, as it does nothing to make the Internet safe. Disclaimers may (for now) protect corporations, but they do not protect the commons we all depend upon. Needless to say, as long as the Internet is unsafe, not only are individuals, end users and consumers, at risk, but so are corporations, vital infrastructure, and even national security. No matter how strident the protest, somebody must be held liable for the cesspool our information networks have become.
It is clear that the major players in this regard are the industries best placed to secure the Internet: the operating system developers, software developers, and ISPs, rather than the end user, who is least able to affect the safety of the products they use (Ryan, 2003). Security, like blame, rolls downhill. Security at the top will mean security at the bottom, at the level of the end user. This is not to excuse the end user, who also bears responsibility for connecting an infected computer to the Internet, but in aggregate, the weight of responsibility must lie with those with the resources to combat the problem, and that means the public and private sectors.
It has been argued that three factors drive change in the U.S.: liability, the demands of the market, and government regulation (Brenner, 2010). Mead (2004) stresses that a uniform approach to the problem of liability is itself problematic, and this, as Moore (2010) has argued, is a problem that can only be corrected through legislation. Ryan (2003) went further, pointing to the threat to the country itself, its infrastructure and economic well-being, as reason enough to legislate software liability. In speaking of liability, it is a simple fact that the government, a single corporation, or an entire industry can affect externalities far more than a single end user can, and here the legal concept of downstream liability, where the source of harm lies upstream of the recipient, must not be ignored (Hallberg, Kabay, Robertson, & Hutt, 2009). The upstream waters are best patrolled by those with the resources to do so.
Based on the foregoing, it would be reasonable to argue for a mixed system of regulation and incentives. Regulation must encourage incentives while not discouraging innovation, requiring a careful balance of the two. The federal government has the most power to effect change, based not simply on its power to regulate but on its purchasing power. One might argue that banning USB devices from federal workplaces would hurt memory stick manufacturers financially, but such a move would also create an incentive for the industry to improve the security of such devices, thus driving change without the need for regulation.
The public, as has been argued, may lack sophistication, but the federal government is another entity altogether. The public may buy what is available without a complete understanding of what they are getting in terms of positive and negative externalities (and indeed, they have little control over it), but the federal government, through its purchasing power, can shun hardware and software that generate those negative externalities. Simply regulating its own purchases would, by virtue of this buying power, serve to regulate, at least in part, the software industry. What incentive cannot be created through non-regulatory means must, of necessity, be created through regulation. There is no more reason to suppose these industries will voluntarily regulate themselves than there is to suppose Wall Street will.
References:
Anderson, R. (2009). Information security: Where computer science, economics and psychology meet. Philosophical Transactions: Mathematical, Physical and Engineering Sciences, 367(1898), 2717-2727. doi:10.1098/rsta.2009.00
Anderson, R., & Fuloria, S. (2009). Security economics and critical national infrastructure. Retrieved from http://www.cl.cam.ac.uk/~rja14/Papers/econ-cni09.pdf
Anderson, R., & Moore, T. (2006). The Economics of Information Security. Science, 314(5799), 610-613. doi:10.1126/science.1130992
Brenner, J. F. (2010). Privacy and Security Why Isn’t Cyberspace More Secure? Communications Of The ACM, 53(11), 33-35. doi:10.1145/1839676.1839688
Bronk, C., & Tikk-Ringas, E. (2013). Hack or attack? Shamoon and the evolution of cyber conflict. James A. Baker III Institute for Public Policy. Retrieved from http://www.bakerinstitute.org/publications/ITP-pub-WorkingPaper-ShamoonCyberConflict-020113.pdf
Cashell, B., Jackson, W. D., Jickling, M., & Webel, B. (2004, April 1). The economic impact of cyber-attacks. Retrieved from http://www.cisco.com/warp/public/779/govtaffairs/images/CRS_Cyber_Attacks.pdf
Department of Homeland Security (2012). Joint security awareness report: Shamoon/DistTrack malware, update B (JSAR-12-241-01B).
Greenemeier, L. (2013, February 12). When will the Internet reach its limit (and how do we stop that from happening)? Scientific American. Retrieved from http://www.scientificamerican.com/article/when-will-the-internet-reach-its-limit/
Hallberg, C., Kabay, M.E., Robertson, B., & Hutt, A.E. (2009). Management responsibilities and liabilities. In Bosworth, S., Kabay, M.E., & Whyne, E. (Eds.). (2009). Computer security handbook (5th ed). New York, NY: John Wiley & Sons.
Hardin, G. (1968, December 13). The tragedy of the commons. Science (AAAS) 162 (3859): 1243-1248. doi:10.1126/science.162.3859.1243
Industry code to take on spammers, botnets and zombies (n.d.). Internet Industry Association. Retrieved from http://iia.net.au/codes-of-practice/icode-iias-esecurity-code.htm
King, L. (2014, February 12). Bitcoin hit by ‘massive’ DDoS attack as tensions rise. Forbes. Retrieved from http://www.forbes.com/sites/leoking/2014/02/12/bitcoin-hit-by-massive-ddos-attack-as-tensions-rise/
LaRue, F. (2011, May 16). Human Rights Council, Seventeenth session Agenda item 3, United Nations General Assembly. Retrieved from http://www2.ohchr.org/english/bodies/hrcouncil/docs/17session/A.HRC.17.27_en.pdf
Lichtman, D. (2004). Holding Internet Service Providers Accountable. Regulation, 27(4), 54-59.
Mead, N. (2004). Who is liable for insecure systems? Computer, 37(7), 27-34.
Monto, G. (2010). The vulnerability of security — a dialogue with Ross Anderson. Current Science (00113891), 98(7), 892-894.
Moore, T. (2010). The economics of cybersecurity: Principles and policy options. International Journal On Critical Infrastructure Protection, 3(3/4), 103-117. doi:10.1016/j.ijcip.2010.10.002
Moore, T., Clayton, R., & Anderson, R. (2009). The Economics of Online Crime. Journal Of Economic Perspectives, 23(3), 3-20. doi:10.1257/jep.23.3.3
Osborne, B. (2003, January 24). ISPs not responsible for malicious code. Geek.com. Retrieved from http://www.geek.com/news/isps-not-responsible-for-malicious-code-552319/
Rosenblum, P. (2014, January 17). The Target data breach is becoming a nightmare. Forbes. Retrieved from http://www.forbes.com/sites/paularosenblum/2014/01/17/the-target-data-breach-is-becoming-a-nightmare/
Rubens, J. T., & Morse, E. A. (2013). Survey of the Law of Cyberspace: Introduction. Business Lawyer, 69(1), 183-187.
Ryan, D. J. (2003). Two views on security software liability: Let the legal system decide. IEEE Security and Privacy, 99(1), 70-72. Retrieved from http://www.cse.unsw.edu.au/~se4921/PDF/twoviews-a.pdf
Waleski, B.D. (2006). The legal implications of information security: Regulatory compliance and liability. In H. Bidgoli (Ed.), Handbook of information security, volume 2. New York, NY: John Wiley & Sons.
White House Office of the Press Secretary. (2009, May 29). Remarks by the President on Securing our Nation’s Cyber Infrastructure, press release. Retrieved from http://www.whitehouse.gov/the_press_office/Remarks-by-the-President-on-Securing-Our-Nations-Cyber-Infrastructure/
Witte, D. S. (2014). Privacy deleted: Is it too late to protect our privacy online? Journal Of Internet Law, 18(1), 1-28.
Woodard, E. (2003, July 1). Estonia, where being wired is a human right. The Christian Science Monitor. Retrieved from http://www.csmonitor.com/2003/0701/p07s01-woeu.html