Volume 2: The Law of Agreements in the Digital Era
Smart contracts are written in programming languages rather than in natural languages. This might seem to insulate them from ambiguity, because the meaning of a program is determined by technical facts rather than by social ones. It does not. Smart contracts can be ambiguous, too, because technical facts depend on socially determined ones. To give meaning to a computer program, a community of programmers and users must agree on the semantics of the programming language in which it is written. This is a social process, and a review of some famous controversies involving blockchains and smart contracts shows that it regularly creates serious ambiguities. In the most famous case, The DAO hack, the fate of more than $150 million in virtual currency turned on the contested semantics of a blockchain-based smart-contract programming language.
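The DAO dispute centered on recursive withdrawal calls: whether draining funds through reentrancy was "permitted by the code" depends on what one takes the code to mean. The following is a minimal illustrative sketch in Python of the reentrancy pattern at issue, not the actual Solidity contract; all names and amounts are hypothetical.

```python
# Illustrative sketch (hypothetical names) of the reentrancy pattern
# behind The DAO hack: the contract pays out via an external call
# BEFORE updating its internal balance record, so a malicious payee
# can re-enter withdraw() and be paid repeatedly.

class Fund:
    def __init__(self):
        self.balances = {}

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount

    def withdraw(self, who):
        amount = self.balances.get(who, 0)
        if amount > 0:
            who.receive(amount)       # external call happens first...
            self.balances[who] = 0    # ...balance is zeroed only after

class ReentrantCaller:
    def __init__(self, fund, reentries):
        self.fund = fund
        self.reentries = reentries
        self.received = 0

    def receive(self, amount):
        self.received += amount
        if self.reentries > 0:
            self.reentries -= 1
            self.fund.withdraw(self)  # re-enter before the balance update
```

A caller who deposits 100 and re-enters twice is paid 300, even though the recorded balance was only 100. The literal execution trace is unambiguous; the dispute is over whether that trace, or the evident intent behind the code, fixes the contract's meaning.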
Why do people keep their heads in the sand when making data-sharing decisions? There is a widespread intuition, supported by copious research, that people are inconsistent in their behavior around internet privacy. Anger about privacy scandals dominates newspaper headlines, but most people don’t change their default privacy settings, even when it’s easy to do so. New evidence confirms that this inconsistency is real, and that information avoidance helps drive it. This raises a further question: how does information avoidance work? This paper presents an experimental design that begins to unpack how information avoidance operates. There are two main results. First, the experiment replicates existing information avoidance experiments: people who value privacy are willing to trade away their data for small amounts of money if given a chance to avoid seeing the privacy consequences of their actions. Second, the experiment shows that while people are comfortable avoiding information about privacy in a passive way, they are not comfortable actively hiding it. These results show that people’s ability to keep their heads in the sand is fragile: it is a preference people are not willing to exercise conspicuously.
In recent years, the insights of behavioral law and economics scholars have improved the efficacy of various contractual regimes through substantive legal reforms ranging from the CARD Act to a revamped RESPA. These insights and reforms attempted to optimize consumer choice architecture and enhance overall consumer decision-making utility, primarily through a combination of new information-deployment techniques and various consumer nudges, in both standardized paper formats and online. But much more can be done to build on these insights and improve decision-making in this space – in particular, to maximize utility for historically marginalized groups. This Article argues that as more traditional commercial transactions move online, they can be more easily customized to engage consumers directly by taking into account a consumer’s race and other demographic factors. Encouraging discrimination in contract formation comes with potential barriers and costs. Certain federal and state regulations prohibit the acquisition and use of such data. Privacy experts caution against the expansive use of online tools and algorithms designed to inferentially gather such data. Consumer demand for racially customized online interactions is uncertain. And corporations could misuse such data to discriminate in harmful ways. But these concerns should be measured against potential market benefits, and they can be addressed through rigorous data analysis of completed contracts. In certain regulated consumer markets, digital platforms that seek to acquire race data and customize contracts would be required to permit regulators to evaluate whether such contract disclosures and contract terms were discriminatory.
Ultimately, in the absence of a more transparent and honest dialogue about the present acquisition and use of such information in online contracts, an unregulated market can use such information at will and without scrutiny – which risks harming consumers while carrying unknown benefits.
The European Union’s General Data Protection Regulation (GDPR) is the most comprehensive legislation yet enacted to govern algorithmic decision-making. Its reception has been dominated by a debate about whether it contains an individual right to an explanation of algorithmic decision-making. We argue that this debate is misguided both in the concepts it invokes and in its broader vision of accountability in modern democracies. It is justification that should guide approaches to governing algorithmic decision-making, not simply explanation. The form of justification – who is justifying what to whom – should determine the appropriate form of explanation. This suggests a sharper focus on systemic accountability, rather than technical explanations of models offered to isolated, rights-bearing individuals. We argue that the debate about the governance of algorithmic decision-making is hampered by its excessive focus on privacy. Moving beyond the privacy frame allows us to focus on institutions rather than individuals and on decision-making systems rather than the inner workings of algorithms. Future regulatory provisions should develop mechanisms within modern democracies to secure systemic accountability over time in the governance of algorithmic decision-making systems.