An Internet of Torts

DRAFT MANUSCRIPT

The proliferating internet-connected devices that constitute the “Internet of Things” (IoT) grant companies unprecedented control over consumers’ use of property, often at the expense of consumer safety. Companies can now remotely alter or deactivate items, a practice that foreseeably causes harm when an otherwise operational car, alarm system, or medical device abruptly ceases to function.

Even as the potential for harm escalates, contract and tort law work in tandem to shield companies from liability. Exculpatory clauses and liability restrictions limit damages; disclaimers and express warranties displace warranty claims; and contractual notice of remote interference precludes common law tort suits. Meanwhile, absent a better understanding of how IoT-enabled injuries operate, judges are likely to apply products liability and negligence standards in ways that relieve IoT companies of liability. In short, this 21st century version of harmful remote action is not adequately addressed by our 20th century civil liability regime.

But while this modern problem may appear novel, the story fits a larger, familiar pattern of new technologies that alter social and power relations between industry and individuals, provoking legal evolution. Civil liability standards regularly change in the wake of technological development: the Industrial Revolution and the associated rise of “stranger cases” prompted courts to broaden the definition of negligence; the rise of mass production and newly attenuated seller/buyer relationships helped spur the products liability revolution. Similarly, this Article proposes reforms to unconscionability, duty, and causation standards to increase accountability, limit industry overreach, and advance efficiency and fairness. Once again, it is necessary to expand civil liability to address new technologically enabled conduct and to correct an attendant power imbalance.


International Cybertorts: Expanding State Accountability in Cyberspace

103 Cornell L. Rev. 565 (2018)

The Sony and DNC hacks exposed a significant gap in the international law of cyberspace: there is no mechanism for holding states accountable for the injuries associated with their costly and invasive cyberoperations. International law lacks a vocabulary for certain kinds of harmful acts in cyberspace, and in the absence of appropriate terminology and legal guidelines, victim states have few effective and non-escalatory responses to these increasingly common and harmful cyberoperations.

This Article constructs a comprehensive regime of state accountability for harmful actions in cyberspace, grounded both in state liability for acts with injurious consequences and in state responsibility for internationally wrongful acts. First, this Article identifies a new class of cyberoperations—transnational cybertorts—and distinguishes them from transnational cybercrime, cyberespionage, and cyberwarfare based on the nature and scope of their respective harms. Second, it discusses the special case of cyberintervention and the importance of developing cyber-specific rules regarding the age-old prohibition on coercive interference. In addition to permitting the construction of tailored and non-escalatory remedies, this approach suggests new solutions to long-vexing issues, such as how to categorize data destruction attacks and better deter cyberespionage.


Autonomous Weapon Systems and the Limits of Analogy

9 Harv. Nat'l Sec. J. 51 (2018)

Most imagine autonomous weapon systems as more independent versions of weapons already in use or as some kind of humanoid robotic soldier. But every analogy is false in some way, and all potential analogies for autonomous weapon systems—weapon, combatant, child soldier, animal—misrepresent legally salient traits and limit our ability to think imaginatively about this technology and anticipate how it might further develop, impeding our ability to properly regulate it. As is often the case when law by analogy cannot justifiably stretch extant law to address novel legal questions, new law is needed. The sooner we escape the confines of these insufficient analogies, the sooner we can create appropriate and effective regulations for autonomous weapon systems.


War Torts: Accountability for Autonomous Weapons

164 U. Pa. L. Rev. 1347 (2016)

Unlike conventional weapons or remotely operated drones, autonomous weapon systems can independently select and engage targets. As a result, they may take actions that look like war crimes without any individual acting intentionally or recklessly. Absent such willful action, no one can be held criminally liable under existing international law.

Criminal law aims to prohibit certain actions, and individual criminal liability allows for the evaluation of whether someone is guilty of a moral wrong. Given that a successful ban on autonomous weapon systems is unlikely (and possibly even detrimental), what is needed is a complementary legal regime that holds states accountable for the injurious wrongs that are the side effects of employing these uniquely effective but inherently unpredictable and dangerous weapons. Just as the Industrial Revolution fostered the development of modern tort law, autonomous weapon systems highlight the need for “war torts”: serious violations of international humanitarian law that give rise to state responsibility.


Change Without Consent: How Customary International Law Modifies Treaties

41 Yale J. Int'l L. 237 (2016)

Despite much recent scholarly focus on how outdated treaties might be updated, surprisingly little attention has been paid to an alternative route of treaty evolution: modification by subsequently developed customary international law. This Article demonstrates that such modification occurs; argues for recognition of its legitimacy; and highlights how it may result in more consensus-respecting action than arguments grounded in consent-based forms of treaty modification.


Consent Is Not Enough: Respecting the Intensity Threshold in Transnational Conflict

165 U. Pa. L. Rev. 1 (2016) (with Oona A. Hathaway, Daniel Hessel, Julia Shu, and Sarah Weiner)

It is widely accepted that a state cannot treat a conflict with an organized non-state actor as an armed conflict until the violence crosses a minimum threshold of intensity. But can a host state consent to a use of force that would be illegal for the host state to use itself? And can an intervening state always presume that host state consent is valid?

This article argues that host state consent is limited and that intervening states cannot treat consent as a blank check. Accordingly, even in consent-based interventions, the logic and foundational norms of the international legal order require both the consent-giving and consent-receiving states to independently evaluate which legal regime governs—which will often turn on whether the intensity threshold has been met. If a non-international armed conflict exists, the intervening state may act pursuant to international humanitarian law; if not, its lawful actions are limited by its own and the host state’s human rights obligations.


The Killer Robots Are Here: Legal and Policy Implications

36 Cardozo L. Rev. 1837 (2015)

Notwithstanding increasing state interest in the issue, no one has yet put forward a coherent legal definition of autonomy in weapon systems, resulting in a confusing conflation of legal, ethical, policy, and political arguments. This Article proposes that an “autonomous weapon system” be defined as “a weapon system that, based on conclusions derived from gathered information and preprogrammed constraints, is capable of independently selecting and engaging targets.” Contrary to the general consensus, such systems are not weapons of the future: they exist and are in use today.

This fact has two main implications: it undermines almost all legal arguments for a ban, as they are based on the false assumption that such weaponry could never be lawfully employed; and it significantly reduces the likelihood that a successful ban will be negotiated, as states will be reluctant to voluntarily relinquish otherwise lawful and uniquely effective weaponry. Accordingly, this Article discusses how best to create successful international regulations.


The Law of Cyber-Attack

100 Calif. L. Rev. 817 (2012) (with Oona A. Hathaway, Philip Levitz, Haley Nix, Aileen Nowlan, William Perdue, and Julia Spiegel)

Cyber-attacks pose a growing threat to national security and international peace. However, while often addressed in the context of humanitarian law, cyber-attacks bear little resemblance to traditional forms of warfare. This Article clarifies the definition of cyber-attack and describes how such attacks are currently regulated under the law of war, international treaties, and domestic criminal law. Concluding that existing legal regimes address only a small fraction of the potential challenges raised by cyber-attacks, this Article proposes how international and domestic law might be adapted or created to more effectively regulate them.


Which Law Governs During Armed Conflict? The Relationship Between International Humanitarian Law and Human Rights Law

96 Minn. L. Rev. 1883 (2012) (with Oona A. Hathaway, Philip Levitz, Haley Nix, William Perdue, Chelsea Purvis, and Julia Spiegel)

Although international human rights and humanitarian law share common roots in their respective efforts to protect human dignity, the two bodies of law appear to have incompatible requirements in armed conflicts. This Article draws on jurisprudence, state practice, and scholarship to describe three approaches to evaluating what is lawful in armed conflicts, explores the consequences of the various applications, and recommends that the United States employ interpretive strategies to minimize discrepancies. In situations where states’ obligations remain irreconcilable, the Article endorses a “specificity decision rule” to determine the applicable legal regime.