
Research

Christopher Yoo

  • Author: Christopher Yoo Citation: Despite the Supreme Court's rejection of common law copyright in Wheaton v. Peters and the more specific codification by the Copyright Act of 1976, courts have continued to play an active role in determining the scope of copyright. Four areas of continuing judicial innovation include fair use, misuse, third-party liability, and the first sale doctrine. Some commentators have advocated broad judicial power to revise and overturn statutes. Such sweeping judicial power is hard to reconcile with the democratic commitment to legislative supremacy. At the other extreme are those who view codification as completely displacing courts' authority to develop legal principles. The problem with this position is that not all codifications are intended to be comprehensive or to displace all preexisting law. One way to reconcile democratic legitimacy with current practice would be to adopt a less categorical approach that recognizes that the proper scope for judicial development is itself a question of legislative intent. In some cases, Congress has affirmatively delegated to the courts the explicit authority to continue to develop the law. In others, Congress modeled certain provisions of the copyright statutes on patent or other areas of law, which provides leeway for judicial development. Either approach would not conflict with the democratic commitments reflected in legislative supremacy. Applying this framework to the four areas of judicial development identified above reveals that the courts' record in applying these principles consistently is mixed. With respect to fair use and misuse, the courts have adopted readings that either follow or are consistent with legislative intent. With respect to third-party liability and the first sale doctrine, the courts have invoked broad analogies between copyright and patent law or canons of construction without analyzing directly whether such approaches were consistent with legislative intent.
  • Author: Christopher Yoo Citation: The Communications Act of 1934 created a dual review process in which mergers in the communications industry are reviewed by the Federal Communications Commission (FCC) as well as the antitrust authorities. Commentators have criticized dual review not only as costly and redundant, but also as subject to substantive and procedural abuse. The process of clearing the 2011 Comcast-NBC Universal merger provides a useful case study to examine whether such concerns are justified. A review of the empirical context reveals that the FCC intervened even though the relevant markets were not structured in a way that would ordinarily raise anticompetitive concerns. In addition, the FCC was able to use differences between its review process and that used by the Justice Department to extract concessions from the merging parties that had nothing to do with the merger and that were more properly addressed through general rulemaking. Moreover, the use of voluntary commitments allowed the FCC to avoid subjecting certain aspects of its decision to public comment and immunized it from having to offer a reasoned explanation or subject its decision to judicial review. The aftermath of the merger provides an opportunity to assess whether the FCC's intervention yielded consumer benefits.
  • Author: Christopher Yoo Citation: This book review takes a critical look at the claim advanced by Susan Crawford in Captive Audience that the merger between Comcast and NBC Universal would harm consumers and that policymakers should instead promote common carriage regulation and subsidize municipal symmetrical gigabit fiber-to-the-home (FTTH). First, it evaluates the extent to which next-generation digital subscriber lines (DSL) and wireless broadband technologies can serve as effective substitutes for cable modem service, identifying FCC data showing that the market has become increasingly competitive and is likely to continue to do so. Furthermore, the market is not structured in a way that would permit the combination of content and conduit to harm competition. Finally, past attempts to recalibrate the balance between content producers and distribution channels have had the unintended consequence of reducing incentives to invest in network infrastructure and can also deter technical leadership and innovation within the communications platform itself.
  • Author: Christopher Yoo Citation: Scholars have spent considerable effort determining how the law of war (particularly jus ad bellum and jus in bello) applies to cyber conflicts, epitomized by the Tallinn Manual on the International Law Applicable to Cyber Warfare. Many prominent cyber operations fall outside the law of war, including the surveillance programs that Edward Snowden has alleged were conducted by the National Security Agency, the distributed denial of service attacks launched against Estonia and Georgia in 2007 and 2008, the Stuxnet virus designed to hinder the Iranian nuclear program, and the unrestricted cyber warfare described in the 1999 book by two Chinese army colonels. Such conduct is instead relegated to the law of espionage and is thus governed almost entirely by domestic law. The absence of an overarching international law solution to this problem heightens the importance of technological self-protective measures.
  • Authors: Michael Janson , Christopher Yoo Citation: One of the most distinctive characteristics of the U.S. telephone system is that it has always been privately owned, in stark contrast to the pattern of government ownership followed by virtually every other nation. What is not widely known is how close the United States came to falling in line with the rest of the world. For the one-year period following July 31, 1918, the exigencies of World War I led the federal government to take over the U.S. telephone system. A close examination of this episode sheds new light on a number of current policy issues. The history confirms that natural monopoly was not solely responsible for AT&T's return to dominance and reveals that the Kingsbury Commitment was more effective in deterring monopoly than generally believed. Instead, a significant force driving the re-monopolization of the telephone system was the U.S. Postmaster General, Albert Burleson—not Theodore Vail, President of AT&T. It also demonstrates that universal service was the result of government-imposed emulation of the postal system, not, as some have claimed, a post hoc rationalization for maintaining monopoly. The most remarkable question is, having once obtained control over the telephone system, why did the federal government ever let it go? The dynamics surrounding this decision reveal the inherent limits of relying on war to justify extraordinary actions. More importantly, it shows the difficulties that governments face in overseeing industries that are undergoing dynamic technological change and that require significant capital investments.
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Authors: Daniel Spulber , Christopher Yoo Citation: Network industries, including the Internet, have shown significant growth, substantial competition, and rapid innovation. This Chapter examines antitrust policy towards network industries. The discussion considers the policy implications of various concepts in the economics of networks: natural monopoly, network economic effects, vertical exclusion, and dynamic efficiency. Our analysis finds that antitrust policy makers should not presume that network industries are more subject to monopolization than other industries. We find that deregulation and the strength of competition in network industries have removed justifications for structural separation as a remedy. Also, we argue that deregulation and competition have effectively eliminated support for application of the essential facilities doctrine. Antitrust policy in network industries should be guided by considerations of dynamic efficiency.
  • Author: Christopher Yoo Citation: The D.C. Circuit's January 2014 decision in Verizon v. FCC represented a major milestone in the debate over network neutrality that has dominated communications policy for the past decade. This article analyzes the implications of the D.C. Circuit's ruling, beginning with a critique of the court's ruling that section 706 of the Telecommunications Act of 1996 gave the Federal Communications Commission (FCC) the authority to mandate some form of network neutrality. Examination of the statute's text, application of canons of construction such as ejusdem generis and noscitur a sociis, and a perusal of the statute's legislative history all raise questions about the propriety of the court's conclusion. Moreover, the precedents on ancillary jurisdiction and common carriage impose limits on the FCC's section 706 jurisdiction, preventing the FCC from regulating content before or after it is in transit and likely barring the FCC from imposing a strict nondiscrimination mandate. A revised rule based on commercial reasonableness as initially proposed by the FCC could accomplish many of the goals of network neutrality without running afoul of these prohibitions. Reclassification of broadband Internet access to bring it within the regulatory regime governing traditional telephone service (known as Title II) faces substantial statutory obstacles, would not prevent prioritization of services, and ignores the longstanding problems associated with common carriage regulation and forbearance. The legislative history of section 706 also suggests that the FCC has the authority to preempt the concurrent jurisdiction accorded to state regulatory authorities. Moreover, calls to extend network neutrality to interconnection between networks overlook the fact that such arrangements are not universal and instead are based on some type of reciprocity, and that requiring zero-price interconnection would ignore the important role played by prices and by bilateral negotiations. The article closes by examining five early examples of network neutrality disputes: MetroPCS/YouTube, AT&T/Apple FaceTime, Verizon/Google tethering apps, Verizon/Google Wallet, and the Amazon Kindle/zero-rating programs. These cases demonstrate the difficulties surrounding the implementation of network neutrality rules.
  • Authors: Daniel Spulber , Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: Since the Internet burst into the public's consciousness during the mid-1990s, it has transformed almost every aspect of daily life. At that time, the economic and technological environment surrounding the Internet remained relatively simple: a small number of users ran a handful of applications over a narrow range of technologies interconnected by a simple set of business relationships. Time has undermined each of these premises. The population of end users has grown exponentially and become increasingly diverse. The applications that dominated the early Internet—email and web browsing—have been joined by new applications such as video and cloud computing that place much greater demands on the network. Wireless broadband and fiber optics have emerged as important alternatives to transmission services provided via legacy telephone and cable television systems, and mobile devices are replacing personal computers as the dominant means for accessing the Internet. At the same time, the networks comprising the Internet are interconnecting through a wider variety of locations and economic terms than ever before. These changes are placing pressure on the Internet's architecture to evolve in response. The Internet is becoming less standardized, more subject to formal governance, and more reliant on intelligence located in the core of the network. At the same time, Internet pricing is becoming more complex, intermediaries are playing increasingly important roles, and the maturation of the industry is causing the nature of competition to change. Moreover, the total convergence of all forms of communications into a single network predicted by many observers may turn out to be something of a myth. In short, policymakers and scholars must replace the static view that focuses on the Internet's past with a dynamic view flexible enough to permit the Internet to evolve to meet the changing needs of the future.
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: During the course of the network neutrality debate, advocates have proposed extending common carriage regulation to broadband Internet access services. Others have endorsed extending common carriage to a wide range of other Internet-based services, including search engines, cloud computing, Apple devices, online maps, and social networks. All too often, however, those who focus exclusively on the Internet era pay too little attention to the lessons of the legacy of regulated industries, which has long struggled to develop a coherent rationale for determining which industries should be subject to common carriage. Of the four rationales for determining the scope of common carriage—whether industry players (1) hold themselves out as serving all comers, (2) are affected with a public interest, (3) are natural monopolies, or (4) offer transparent transmission capability between points of the customer's choosing without change—each has been discredited or is inapplicable to Internet-based technologies. Moreover, common carriage has long proven difficult to implement. Nondiscrimination is difficult to enforce when products vary in terms of quality or cost and forecloses demand-side price discrimination schemes (such as Ramsey pricing) that can increase economic welfare. In addition, the academic literature has long noted that the obligation to keep rates reasonable is difficult to apply, has trouble accommodating differences in quality, provides weak incentives to economize, creates systematic biases toward inefficient solutions, raises difficult questions about how to allocate common costs, deters innovation, and facilitates collusion by creating entry barriers, standardizing products, pooling information, providing advance notice of any pricing changes, and allowing the government to serve as the cartel enforcer. Three historical examples—early competitive local telephone companies known as competitive access providers, the detariffing of business services, and Voice over Internet Protocol—provide concrete illustrations of how refraining from imposing common carriage regulation can benefit consumers.
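    For reference, the Ramsey pricing named in the entry above follows the textbook inverse-elasticity rule; the following is a minimal sketch of that rule, not a formula drawn from the article itself:

      \[
        \frac{p_i - MC_i}{p_i} \;=\; \frac{\lambda}{1+\lambda}\cdot\frac{1}{\varepsilon_i}
      \]

      Here, for each service $i$, $p_i$ is price, $MC_i$ is marginal cost, $\varepsilon_i$ is the own-price elasticity of demand, and $\lambda$ is the multiplier on the firm's break-even constraint. Markups are highest where demand is least elastic, which is the kind of demand-side price discrimination a strict nondiscrimination rule forecloses.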
  • Authors: Daniel Spulber , Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: Modularity is often cited as one of the foundations for the Internet's success. Unfortunately, academic discussions about modularity appearing in the literature on Internet policy are undertheorized. The persistence of nonmodular architectures for some technologies underscores the need for some theoretical basis for determining when modularity is the preferred approach. Even when modularity is desirable, theory must provide some basis for making key design decisions, such as the number of modules, the location of the interfaces between the modules, and the information included in those interfaces. The literature on innovation indicates that modules should be determined by the nature of task interdependencies and the variety inherent in the external environment. Moreover, modularity designs interfaces to ensure that modules operate independently, with all information about processes that adjacent modules should not take into account being hidden within the module. These insights in turn offer a number of important implications. They mark a return to a more technological vision of vertical integration that deviates from the transaction-cost oriented vision that now dominates the literature. They also reveal how modularity necessarily limits the functionality of any particular architecture. In addition, although the independence fostered by modularity remains one of its primary virtues, it can also create coordination problems in which actors operating within each module optimize based on local conditions in ways that can lead to suboptimal outcomes for the system as a whole. Lastly, like any design hierarchy, modular systems can resist technological change. These insights shed new light on unbundling of telecommunications networks, network neutrality, calls for open APIs, and clean-slate redesign proposals.
  • Author: Christopher Yoo Citation: To most social scientists, the technical details of how the Internet actually works remain arcane and inaccessible. At the same time, convergence is forcing scholars to grapple with how to apply regulatory regimes developed for traditional media to a world in which all services are provided via an Internet-based platform. This chapter explores the problems caused by the lack of familiarity with the underlying technology, using as its focus the network neutrality debate that has dominated Internet policy for the past several years. The analysis underscores a surprising lack of sophistication in the current debate. Unfamiliarity with the Internet's architecture has allowed some advocates to characterize prioritization of network traffic as an aberration, when in fact it is a central feature designed into the network since its inception. The lack of knowledge has allowed advocates to recast pragmatic engineering concepts as supposedly inviolable architectural principles, effectively imbuing certain types of political advocacy with a false sense of scientific legitimacy. As the technologies comprising the network continue to change and the demands of end users create pressure on the network to further evolve, the absence of technical grounding risks making the status quo seem like a natural construct that cannot or should not be changed.
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: As the Internet becomes more important to the everyday lives of people around the world, commentators have tried to identify the best policies for increasing the deployment and adoption of high-speed broadband technologies. Some claim that the European model of service-based competition, induced by telephone-style regulation, has outperformed the facilities-based competition underlying the US approach to promoting broadband deployment. The mapping studies conducted by the US and the EU for 2011 and 2012 reveal that the US led the EU in many broadband metrics.
      • High-Speed Access: A far greater percentage of US households had access to Next Generation Access (NGA) networks (25 Mbps) than in Europe. This was true whether one considered coverage for the entire nation (82% vs. 54%) or for rural areas (48% vs. 12%).
      • Fiber Deployment: The US had better coverage for fiber-to-the-premises (FTTP) (23% vs. 12%). Furthermore, FTTP remained a less important contributor to NGA coverage than other technologies.
      • Regression Analysis of Key Policy Variables: Regressions built around the mapping data indicate that the US emphasis on facilities-based competition has proven more effective in promoting NGA coverage than the European emphasis on infrastructure sharing and service-based competition.
      • Investment: Other data indicate that the US broadband industry invested more than twice as much capital per household as the European broadband industry every year from 2007 to 2012. In 2012, for example, the US industry invested US$ 562 per household, while EU providers invested only US$ 244 per household.
      • Download Speeds: US download speeds during peak times (weekday evenings) averaged 15 Mbps in 2012, which was below the European average of 19 Mbps. There was also a disparity between the speeds advertised and delivered by broadband providers in the US and Europe. During peak hours, US actual download speeds were 96% of what was advertised, compared to Europe, where consumers received only 74% of advertised download speeds. The US also fared better in terms of advertised vs. actual upload speeds, latency, and packet loss.
      • Pricing: The European pricing study reveals that US broadband was cheaper than European broadband for all speed tiers below 12 Mbps. US broadband was more expensive for higher speed tiers, although the higher cost was justified in no small part by the fact that US Internet users on average consumed 50% more bandwidth than their European counterparts.
    Case studies of eight European countries (Denmark, France, Germany, Italy, the Netherlands, Spain, Sweden, and the United Kingdom) confirm that facilities-based competition has served as the primary driver of investments in upgrading broadband networks. Moreover, the countries that emphasized FTTP had the lowest NGA coverage rates in this study and ranked among the lowest NGA coverage rates in the European Union. In fact, two countries often mentioned as leaders in broadband deployment (Sweden and France) end up being rather disappointing both in terms of national NGA coverage and rural NGA coverage. These case studies emphasize that broadband coverage is best promoted by a flexible approach that does not focus exclusively on any one technology.
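    The regression analysis summarized in the entry above is described only at a high level; the Python sketch below illustrates the general form such a country-level regression might take. The variable names and the data are hypothetical placeholders, not the study's actual dataset or specification.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical country-level observations (synthetic placeholders, not the study's data):
      #   nga_coverage      share of households covered by NGA (>= 25 Mbps) networks
      #   facilities_share  share of broadband lines supplied over competing physical platforms
      #   unbundling_share  share of broadband lines supplied via mandated access to the incumbent
      df = pd.DataFrame({
          "nga_coverage":     [0.82, 0.54, 0.60, 0.48, 0.70, 0.65],
          "facilities_share": [0.60, 0.25, 0.35, 0.20, 0.45, 0.40],
          "unbundling_share": [0.05, 0.40, 0.30, 0.45, 0.20, 0.25],
      })

      # Ordinary least squares regression of NGA coverage on the two policy variables.
      model = smf.ols("nga_coverage ~ facilities_share + unbundling_share", data=df).fit()
      print(model.summary())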
  • Author: Christopher Yoo Citation: Standard essential patents have emerged as a major focus in both the public policy and academic arenas. The primary concern is that once a patented technology has been incorporated into a standard, the standard can effectively insulate it from competition from substitute technologies. To guard against the appropriation of quasi-rents that are the product of the standard setting process rather than the innovation itself, standard setting organizations (SSOs) require patentholders to disclose their relevant intellectual property before the standard has been adopted and to commit to license those rights on terms that are fair, reasonable, and non-discriminatory (FRAND). To date courts and commentators have provided relatively little guidance as to the meaning of FRAND. The most common approach is to impose a uniform royalty based on a percentage of overall revenue. The baseline for setting this uniform royalty is the royalty that the patentholder could have charged had the standard not been created. In essence, this approach takes the ex ante distribution of entitlements as given and attempts to ensure that the standard setting process does not increase patentholders' bargaining power. However, comparisons to the ex ante baseline do not provide a basis for assessing whether the resulting outcome would maximize economic welfare. Fortunately, public goods economics can provide an analytical framework for assessing whether a particular licensing structure is likely to maximize economic welfare. Although it is often observed that patentable inventions are public goods, key concepts of public good economics (such as the Samuelson condition, which supplies public good economics' key optimality criterion) are rarely explored in any depth. A close examination of public good economics reveals that it has important implications for standard essential patents and FRAND. The resulting framework surpasses the current approach by providing a basis for assessing whether any particular outcome is likely to maximize welfare instead of simply taking the existing distribution of entitlements as given and allocating them in the most efficient way. In addition, the insight that demand-side price discrimination is a necessary precondition to efficient market provision suggests that economic welfare would be maximized if holders of standard essential patents were permitted to charge nonuniform royalty rates. At the same time, the optimal level of price discrimination would allow consumers to retain some of the surplus. It also underscores that the fundamental problem posed by standard essential patents may be strategic behavior and incentive incompatibility. The literature also suggests several alternative institutional structures that can help mitigate some of these concerns.
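    For reference, the Samuelson condition invoked in the entry above, stated in its standard textbook form (a sketch, not quoted from the article):

      \[
        \sum_{i=1}^{n} MRS^{\,i}_{G,x} \;=\; MRT_{G,x}
        \qquad\text{or, equivalently,}\qquad
        \sum_{i=1}^{n} MB_i(G) \;=\; MC(G)
      \]

      Efficient provision of a public good $G$ requires that the sum across all users of the marginal willingness to pay for the good equal its marginal cost of provision, a benchmark that looks to aggregate welfare rather than to the ex ante distribution of entitlements.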
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Authors: Adam Finkel , Christopher Yoo Citation: One of the most controversial issues among legal academics is the extent to which constitutional interpretation should adjust to reflect contemporary values. What has received less attention is the extent to which changes in constitutional interpretation are driven not by shifts in political mores, but rather by new developments in technology. This chapter provides three examples of how technological change can affect law. First, technology can undercut existing law, as demonstrated by how the shift to digital and Internet-based transmission has undermined the traditional rationales for applying a relaxed free speech standard to broadcast television. Second, technology can create pressure to modify existing law, illustrated by how protections against search and seizure have adapted to the advent of global positioning systems (GPS). Third, technology can provoke the creation of new constitutional rights, exemplified by the German courts' recognition of constitutional protection against remote online searches of computers. These developments also raise important questions about whether courts or legislatures are better suited to deal with this fast-changing environment.
  • Author: Christopher Yoo Citation: Much of the current debate over Internet policy is framed by the belief that there has always been a single Internet that was open to everyone. Closer inspection reveals a number of important ways in which the architecture has deviated from this commitment. Providers frequently deploy Voice over Internet Protocol (VoIP) and Internet Protocol Television (IPTV) over hybrid networks that reserve bandwidth or employ technologies such as MultiProtocol Label Switching (MPLS) that are not fully accessible to the public Internet. At the same time, the increasing value of variety and decreasing returns to scale are mitigating the value of being connected to a single network, and the growth of multihoming, in which subscribers maintain multiple connections, is contradicting the myth of the one screen that presumes that every connection must be everything to everyone. Finally, large customers who are unable to obtain the services they need exercise the exit option by turning to private networking. These developments counsel against maintaining a one-size-fits-all approach to Internet policy that may not reflect current realities.
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: The technological context surrounding the Supreme Court's landmark decision in FCC v. Pacifica Foundation allowed the Court to gloss over the tension between two rather disparate rationales. Those adopting a civil libertarian view of free speech could support the decision on the grounds that viewers' and listeners' inability to filter out unwanted speech exposed them to content that they did not wish to see or hear. At the same time, Pacifica also found support from those who more paternalistically regard indecency as low value (if not socially harmful) speech that is unworthy of full First Amendment protection. The arrival of filtering technologies has introduced a wedge between those who supported the constitutionality of indecency regulations out of a desire to enhance individual autonomy and those who wish to restrict speech in order to promote a particular vision of the public good. At the same time, commentators on the political left have begun to question whether continued support for the classic liberal vision of free speech may be interfering with the advancement of progressive values. This Article offers a qualified defense of the libertarian vision of free speech. Deviating from the civil libertarian view would require a revolution in doctrine and would contradict the postulate of independent moral agency that lies at the heart of liberal theory. Although some suggested institutions for ascertaining the idealized preferences that individuals ought to have could justify allowing the government to override individuals' actual preferences, such an approach is all too reminiscent of the Rousseauian notion of being forced to be free and has never been accepted by the Supreme Court. Finally, claims that private censorship presents risks commensurate with public censorship fail to address the fact that liberal theory presupposes the existence of a private sphere into which the state cannot intrude, as well as the long tradition recognizing the special dangers associated with the coercive power of the state. Moreover, the rationales upon which the Supreme Court has relied to justify overriding individual preferences in broadcasting and cable have been undermined by technological change.
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: In addition to prompting the development of the Coase Theorem, Ronald Coase's landmark 1959 article on The Federal Communications Commission touched off a revolution in spectrum policy. Although one of Coase's proposed reforms (that spectrum should be allocated through markets) has now become the conventional wisdom, his other principal recommendation (that governments stop dedicating portions of the spectrum to particular uses) has yet to be fully embraced. Drawing on spectrum as well as Internet traffic and electric power as examples, this Article argues that emerging technologies often reflect qualities that make defining property rights particularly difficult. These include the cumulative nature of interference, the presence of significant interdependencies, and the presence of significant geographic discontinuities in interference patterns, exacerbated by the localized nature of information. These technological considerations define the natural boundaries of property by creating transaction-free zones that must be encompassed within a single parcel. They also complicate defining property rights by making it difficult to identify and attribute harm to particular sources of interference. These challenges can make governance a more attractive solution than exclusion. Other commentators have suggested that the failure to create well-defined property rights in spectrum supports wider use of open access regimes, citing the work of Elinor Ostrom and Michael Heller, or arguing that spectrum is not scarce. Ostrom's work points out that governance of common property requires features that are quite inconsistent with open access, including a finely tailored and unequal allocation mechanism, strict internal monitoring, strong property protection to prevent outside interference, stability, and homogeneity. Heller's theory of the anticommons is sometimes misinterpreted as being hostile towards property. Instead, it is better understood as condemning giving exclusionary rights in the same piece of property to multiple owners, all of whom must agree on any major decision. The primary solution to the anticommons is not open access, but rather unitization of the interests in a single owner. Moreover, bargaining over an anticommons is also properly modeled through the chicken (or snowdrift) game, which has more of a zero-sum, all-or-nothing quality, rather than the opportunities for cooperation frustrated by a lack of trust that characterize the prisoner's dilemma and traditional holdout behavior. The final argument, that spectrum is not scarce, simply cannot be squared with Shannon's Law. Instead, the solution may lie in reconfiguring rights to increase owners' ability to bargain towards workable solutions. A market maker controlling sufficient property and able to integrate local information could design a mechanism that can solve some of these problems. Property could also be reconfigured to provide more of the primitives needed to write effective contracts. Finally, these challenges, as well as the need to reduce information costs on third parties, provide an explanation for the persistence of use restrictions. In addition, continuing the fiction of government ownership of the spectrum may make it easier to reconfigure rights when necessary.
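    The Shannon's Law referenced in the entry above is the Shannon-Hartley capacity theorem; a minimal statement of it (not taken from the article):

      \[
        C \;=\; B \log_2\!\left(1 + \frac{S}{N}\right)
      \]

      Here $C$ is channel capacity in bits per second, $B$ is bandwidth in hertz, and $S/N$ is the signal-to-noise ratio. Because capacity grows only linearly in bandwidth and merely logarithmically in power, a finite spectrum allocation places a hard ceiling on throughput, which is why claims that spectrum is not scarce are difficult to sustain.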
  • Author: Christopher Yoo Citation: The deployment of telecommunications services in Korea represents one of the great technological success stories of the developing world. In a remarkably brief period, the penetration of local telephone service, wireless telephony, and broadband technologies has soared to among the highest levels in the world. The history of Korean telecommunications thus provides a useful case study for other developing countries seeking to expand and modernize their telecommunications infrastructures. At first blush, the explosive growth of telecommunications services has appeared to go hand in hand with the liberalization of Korea’s telecommunications markets. A review of the history of Korean telecommunications reform reveals that the market liberalization that did exist was largely the result of foreign pressure. Moreover, although Korea took steps towards liberalizing its telecommunications markets, culminating with the substantial reforms announced in 1995, it has backslid since that time, allowing the industry to engage in a disturbing degree of re-concentration. As a result, Korea has not received the full benefit of the enhancements to consumer welfare, efficiency, and innovation that traditionally result from competition. It also suggests that, notwithstanding pronouncements to the contrary, the traditional pattern of direct governmental involvement in industrial policy remains firmly in place.
  • Author: Christopher Yoo Citation: Social networks are among the most dynamic forces on the Internet, increasingly displacing search engines as the primary way that end users find content and garnering headlines for their controversial stock offerings. In what may be considered a high-technology rite of passage, social networking companies are now facing monopolization claims under the antitrust laws. This Article evaluates the likely success of these claims, identifying considerations in network economics that may mitigate a finding of market power and evaluating whether a social network's refusal to facilitate data portability can constitute exclusionary conduct. It also analyzes two early private antitrust law cases against social networking sites: LiveUniverse v. MySpace and Facebook v. Power Ventures. These analytical considerations and early cases underscore the importance of requiring that antitrust claims be asserted in terms of a coherent economic theory backed by empirical evidence. Permitting looser assertions of anticompetitive conduct risks protecting competitors instead of competition.
  • Author: Christopher Yoo Citation: Personhood theory is almost invariably cited as one of the primary theoretical bases for copyright. The conventional wisdom, which typically invokes the work of Immanuel Kant and Georg Wilhelm Friedrich Hegel as its philosophical foundation, views creative works as the embodiment of their creators' personality. This unique connection between authors and their works justifies giving authors property interests in the results of their creative efforts. This Essay argues that the conventional wisdom is fundamentally flawed. It is inconsistent both with Kant's and Hegel's theories about the relationship between property and personality and with their specific writings about the unauthorized copying of books. It also adopts too narrow a vision of the ways that creativity can develop personality by focusing exclusively on the products of the creative process and ignoring the self-actualizing benefits of the creative process itself. German aesthetic theory broadens the understanding of the interactions between creativity and personality. Psychologists, aestheticians, and philosophers have underscored how originating creative works can play an important role in self-actualization. When combined with the recognition that creative works frequently borrow from the corpus of existing works, this insight provides a basis for broadening fair use rights. Moreover, to the extent that works must be shared with audiences or a community of like-minded people in order to be meaningful, it arguably supports a right of dissemination. The result is a theory that values the creative process for the process itself and not just for the artifacts it creates, takes the interests of follow-on authors seriously, and provides an affirmative theory of the public domain. The internal logic of this approach carries with it a number of limitations, specifically that any access rights be limited to uses that are noncommercial and educational and extend no farther than the amount needed to promote self-actualization.
  • Author: Christopher Yoo Citation: For the past several decades, U.S. policymakers and the courts have charted a largely deregulatory course with respect to telecommunications. During the initial stages, these decisionmakers responded to technological improvements by narrowing regulation to cover only those portions of the industry that remained natural monopolies and deregulating those portions that became open to competition. Eventually, Congress began regulating individual network components rather than services, mandating that incumbent local telephone companies provide unbundled access to any network element. As these elements became open to competition, the courts prompted the Federal Communications Commission to release almost the entire network from unbundling obligations. The advent of the Obama Administration, the recent financial crisis, and the persistence of regulatory intervention in Europe have prompted a debate over whether the U.S. should begin to reregulate. This article reviews how regulation has forced consumers and providers to bear the costs associated with rate regulation, prevented them from benefitting from the efficiencies associated with vertical integration, forced them to bear the implementation costs of unbundling, and adversely affected incentives to invest in new network capacity. More recent arguments in favor of using unbundling as a way to help new entrants climb the ladder of investment have proven difficult to administer and are empirically unsubstantiated. As a matter of comparative second-best analysis, the decision should be based on the tradeoff between short-run static efficiency losses and long-run dynamic efficiency gains and on institutional considerations, such as the greater administrability of structural relief, the benefits of decentralized decisionmaking, the distortions caused by regulatory lag, and biases in governmental decisionmaking processes, which generally favor deregulation. Moreover, the increasing viability of competition heightens the importance of investment incentives and makes the costs of regulatory intervention harder to justify.
  • Authors: Peter Decherney , Nathan Ensmenger , Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: The Internet unquestionably represents one of the most important technological developments in recent history. It has revolutionized the way people communicate with one another and obtain information and created an unimaginable variety of commercial and leisure activities. Interestingly, many members of the engineering community observe that the current network is ill-suited to handle the demands that end users are placing on it. Indeed, engineering researchers often describe the network as ossified and impervious to significant architectural change. As a result, both the U.S. and the European Commission are sponsoring clean slate projects to study how the Internet might be designed differently if it were designed from scratch today. This Essay explores emerging trends that are transforming the way end users are using the Internet and examines their implications both for network architecture and public policy. These trends include Internet protocol video, wireless broadband, cloud computing, programmable networking, and pervasive computing and sensor networks. It discusses how these changes in the way people are using the network may require the network to evolve in new directions.
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Authors: Tim Wu , Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Authors: Daniel Spulber , Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: Network providers are experimenting with a variety of new business arrangements. Some are offering specialized services that guarantee higher levels of quality of service to those willing to pay for it. Others are entering into strategic partnerships that allocate more bandwidth to certain sources. Interestingly, a management literature exists suggesting that both developments may simply reflect the ways that the nature of competition and innovation can be expected to change as markets mature. The real question is not if the nature of competition and innovation will change, but rather when and how. This theory also suggests that policymakers should be careful not to lock the Internet into any particular architecture or to reflexively regard deviations from the status quo as inherently anticompetitive. Instead, they should focus on creating regulatory structures that preserve industry participants' freedom to tussle with new solutions and to adapt to changing market conditions. Any other approach risks precluding the industry from following its natural evolutionary path.
  • Authors: Fabrizio Marrella , Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Authors: Daniel Spulber , Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: On April 18-19, 2008, the University of Pennsylvania Law School hosted a landmark conference on The Enduring Lessons of the Breakup of AT&T: A Twenty-Five Year Retrospective. This conference was the first major event for Penn's newly established Center for Technology, Innovation, and Competition, a research institute committed to promoting basic research into foundational frameworks that will shape the way policymakers think about technology-related issues in the future. The breakup of AT&T represents an ideal starting point for reexamining the major themes of telecommunications policy that have emerged over the past quarter century. The conference featured a keynote address by the Hon. Richard A. Posner of the U.S. Court of Appeals for the 7th Circuit. Panels addressed the following topics:
      - Looking Back at Divestiture: What Worked? What Didn't? (Roger Noll, Paul MacAvoy, Alfred Kahn, Joseph Weber)
      - Equal Access as the New Regulatory Paradigm: The Transition from Rate Regulation to Access Regulation (Glen Robinson, Tim Wu, Christopher Yoo, Gerald Faulhaber)
      - Structural Separation in Dynamic Markets: Lessons for the Internet, Lessons for Europe (Joseph Farrell, Eli Noam, Michael Riordan, Michael Salinger)
      - From the MFJ to Trinko: The Essential Facilities Doctrine and the Proper Provinces of Antitrust and Regulation (Daniel Spulber, Michael Katz, Timothy Brennan, Howard Shelanski)
      - Regulation by Consent Decree: Lessons for Microsoft and Beyond (Richard Epstein, Robert Crandall, Daniel Rubinfeld, Philip Weiser)
      - The Future of Intercarrier Compensation (Gerald Brock, Simon Wilkie, James Speta, Kevin Werbach)
    Selected papers were published in the Federal Communications Law Journal.
  • Author: Christopher Yoo Citation: This symposium contribution explores how technological convergence and the shift towards access regulation are fundamentally transforming the basic tools and goals of telecommunications regulation. However, policy makers have largely ignored the manner in which access requirements can forestall the buildout of alternative transmission technologies. Simply put, compelling access discourages investment in new networks by rescuing firms that need network services from having to invest in alternative sources of supply. In addition, forcing incumbent carriers to share their networks cuts those who would like to construct alternative network facilities off from their natural strategic partners. As a result, access remedies can have the perverse effect of cementing existing monopolies into place. In addition, policy makers have largely overlooked how technological convergence and the shift towards access regulation have undercut the justification for employing cost-based methodologies when setting rates. The more appropriate step at this point would be to adopt the more economically sound approach of basing rates on market prices. Finally, the advent of convergence is also exerting pressure on the tendency under current law to regulate each communications technology as a universe unto itself. The impending shift to packet-switched architectures promises to cause all networks to become substitutes for one another. Indeed, it is possible to envision a world in which different network technologies act as complements rather than substitutes for one another, with different packets arriving in the house through the most efficient transmission media, a transformation that would pose its own share of regulatory challenges.
  • Author: Christopher Yoo Citation: The well-known access-incentives tradeoff that lies at the heart of the standard economic analysis of copyright follows largely from the assumption that copyright turns authors into monopolists. If one instead analyzes copyright through a framework that allows for product differentiation and entry, the access-incentives tradeoff becomes less significant. By increasing producer appropriability and profit, increased copyright protection can stimulate entry of competitors producing similar works, which in turn results in lower prices, increased product variety, and increased access. This approach would also broaden the set of available policy instruments, although disentangling the effects of one from another can be quite complicated.
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Authors: Daniel Spulber , Christopher Yoo Citation: [No abstract on file]
  • Authors: John Conley , Christopher Yoo Citation: [No abstract on file]
  • Authors: Daniel Spulber , Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: [No abstract on file]
  • Author: Christopher Yoo Citation: Television policy has been viewed historically as posing an irreconcilable conflict between static and dynamic efficiency. Static efficiency requires that the price for television programming be set at marginal cost, which in the case of television programming is essentially zero. Dynamic efficiency dictates that the price be set high enough to allow the program to generate sufficient revenue to cover its fixed costs. Truly optimal (i.e., first-best) pricing was regarded as impossible, with any pricing decision necessarily reducing to a tradeoff between these two considerations. In this Article, Professor Yoo combines the insights of public good economics and monopolistic competition theory to advance a new approach to the regulation of television that brings these two seemingly contradictory forces into alignment. He then explores this framework by using it to evaluate one of the most longstanding and central commitments of U.S. television policy—the promotion and preservation of free, local television—which he argues is better viewed as comprising four subcommitments. Application of this framework reveals that these subcommitments have actually had the effect of impeding rather than promoting free, local television. Abandonment of these subcommitments would likely cause the quantity, quality, and diversity of television programming to increase. The analysis also shows how attempts to foster free, local television have induced secondary distortions in markets for other spectrum-based communications and have slowed the deployment of new technologies, such as third-generation wireless devices.
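    As a gloss on the static/dynamic tension described in the entry above, a minimal formalization (an illustrative sketch, not the article's own notation):

      \[
        \text{static efficiency: } p = MC \approx 0
        \qquad\qquad
        \text{dynamic efficiency (viability): } (p - MC)\,q \;\ge\; F
      \]

      With marginal cost near zero and fixed programming costs $F > 0$, no single uniform price can satisfy both conditions, which is the tradeoff the article's combination of public good economics and monopolistic competition theory seeks to reframe.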