Broadband Connectivity Competition Policy
FTC Staff Report
June 2007
Federal Trade Commission
DEBORAH PLATT MAJORAS Chairman
PAMELA JONES HARBOUR Commissioner
JON LEIBOWITZ Commissioner
WILLIAM E. KOVACIC Commissioner
J. THOMAS ROSCH Commissioner
Brian Huseman Chief of Staff
Charles H. Schneider Executive Director
Jeffrey Schmidt Director, Bureau of Competition
Lydia B. Parnes Director, Bureau of Consumer Protection
Michael A. Salinger Director, Bureau of Economics
Maureen K. Ohlhausen Director, Office of Policy Planning
Jeanne Bumpus Director, Office of Congressional Relations
Randolph W. Tritell Director, Office of International Affairs
Nancy Ness Judy Director, Office of Public Affairs
William Blumenthal General Counsel
Donald S. Clark Secretary of the Commission
Report Drafters and Contributors:
Gregory P. Luib, Assistant Director, Office of Policy Planning
Daniel J. Gilman, Attorney Advisor, Office of Policy Planning
Christopher M. Grengs, Attorney Advisor, Office of Policy Planning
James F. Mongoven, Deputy Assistant Director, Bureau of Competition
Armando Irizarry, Bureau of Competition
Mary Beth Richards, Deputy Director, Bureau of Consumer Protection
Elizabeth A. Hone, Assistant Director, Bureau of Consumer Protection
Robert Schoshinski, Bureau of Consumer Protection
Denis A. Breen, Assistant Director, Bureau of Economics
Patrick J. DeGraba, Bureau of Economics
Rachel Miller Dawson, Assistant General Counsel for Legal Counsel, Office of General Counsel
Lisa M. Harrison, Office of General Counsel
Robert B. Mahini, Office of General Counsel
Kim Vandecar, Congressional Specialist, Office of Congressional Relations
Katherine Rosenberg, Office of Congressional Relations
Yael Weinman, Office of International Affairs
Inquiries concerning this Report should be directed to:
Maureen K. Ohlhausen, Director, Office of Policy Planning
(202) 326-2632 or [email protected]
This Report represents the views of the FTC staff and does not necessarily represent the views of
the Commission or any individual Commissioner. The Commission, however, has voted to
authorize the staff to issue this Report.
TABLE OF CONTENTS

INTRODUCTION AND EXECUTIVE SUMMARY .......... 1

I: THE INTERNET: HISTORICAL AND TECHNICAL BACKGROUND .......... 13
   A. HISTORICAL DEVELOPMENT .......... 13
   B. MAJOR INTERNET COMPONENTS .......... 23
   C. NETWORK MANAGEMENT, DATA PRIORITIZATION, AND OTHER FORMS OF DATA DISCRIMINATION .......... 28

II: LEGAL AND REGULATORY BACKGROUND AND DEVELOPMENTS .......... 37
   A. FTC JURISDICTION UNDER THE FTC ACT .......... 38
   B. FCC JURISDICTION UNDER THE COMMUNICATIONS ACT .......... 42
   C. REGULATORY AND JUDICIAL CLARIFICATION .......... 43

III: ARGUMENTS IN FAVOR OF AND AGAINST NETWORK NEUTRALITY REGULATION .......... 51
   A. ARGUMENTS IN FAVOR OF NETWORK NEUTRALITY REGULATION .......... 52
   B. ARGUMENTS AGAINST NETWORK NEUTRALITY REGULATION .......... 60

IV: DISCRIMINATION, BLOCKAGE, AND VERTICAL INTEGRATION .......... 70
   A. LAST-MILE ACCESS CONCERNS CONTINGENT ON MARKET POWER .......... 72
   B. POTENTIAL PROBLEMS INDEPENDENT OF LAST-MILE MARKET POWER .......... 76
   C. POTENTIAL BENEFITS OF VERTICAL INTEGRATION .......... 80
   D. BRIEF SUMMARY AND REMAINING QUESTIONS .......... 82

V: DATA PRIORITIZATION .......... 83
   A. WHY PRIORITIZE DATA? .......... 84
   B. PRIORITIZATION VERSUS CAPACITY EXPANSION .......... 86
   C. TYPES AND USES OF DATA PRIORITIZATION .......... 88
   D. CONCLUSION .......... 96

VI: THE CURRENT AND FUTURE STATE OF BROADBAND COMPETITION .......... 98
   A. HISTORICAL BACKGROUND: DIAL-UP SERVICE .......... 98
   B. VIEWS ON THE STATE OF BROADBAND COMPETITION .......... 99
   C. MUNICIPAL PROVISION OF WIRELESS INTERNET ACCESS .......... 106
   D. FEDERAL SPECTRUM POLICIES .......... 109
   E. INTERNATIONAL COMPARISONS .......... 113

VII: ANTITRUST ANALYSIS OF POTENTIAL BROADBAND PROVIDER CONDUCT .......... 120
   A. GENERAL PRINCIPLES UNDERLYING THE ANTITRUST LAWS .......... 120
   B. POTENTIAL ANTITRUST THEORIES .......... 121

VIII: CONSUMER PROTECTION ISSUES .......... 129
   A. AN OVERVIEW OF SECTION 5 OF THE FTC ACT .......... 129
   B. APPLICABILITY OF CONSUMER PROTECTION LAWS TO BROADBAND INTERNET ACCESS SERVICES .......... 130
   C. ADDITIONAL MEASURES TO PROTECT CONSUMERS .......... 136

IX: PROPOSALS REGARDING BROADBAND CONNECTIVITY .......... 138
   A. EXISTING AGENCY OVERSIGHT .......... 138
   B. FCC POLICY STATEMENT AND MERGER CONDITIONS .......... 141
   C. LEGISLATIVE PROPOSALS .......... 145
   D. OTHER PROPOSALS RELATING TO BROADBAND CONNECTIVITY .......... 147

X: SUGGESTED GUIDING PRINCIPLES .......... 155
   A. COMPETITION IN BROADBAND INTERNET ACCESS SERVICES .......... 155
   B. GROUNDS FOR PROCEEDING WITH CAUTION .......... 157
   C. CONTINUED AGENCY OVERSIGHT .......... 161

APPENDIX 1 – BROADBAND CONNECTIVITY COMPETITION POLICY WORKSHOP PARTICIPANTS .......... 163

APPENDIX 2 – GLOSSARY OF FREQUENTLY USED ACRONYMS .......... 165
INTRODUCTION AND EXECUTIVE SUMMARY
Background
The Internet[1] has profoundly impacted numerous aspects of daily life for many
people in the United States and is increasingly vital to the American economy. In
response to recent debate relating to Internet access issues, Federal Trade Commission
(“FTC” or “Commission”) Chairman Deborah Platt Majoras announced the formation of
the Internet Access Task Force (“Task Force”) in August 2006 and invited interested
parties to meet with the Task Force to discuss issues relating to Internet access generally
and net neutrality[2] in particular.[3] The Task Force held a two-day public workshop on
broadband connectivity competition policy in February 2007 (“Workshop”) to bring
together consumer advocates and experts from business, government, academia, and the
technology sector to explore competition and consumer protection issues relating to
broadband Internet access.[4] The purpose of this Report is to summarize the Task Force's
learning on broadband Internet connectivity in general and network neutrality in
particular, as developed from the Workshop, meetings between the Task Force and
various interested parties, and the FTC staff’s independent research.
[1] As discussed in more detail in Chapter I of this Report, the term "Internet" is commonly used to refer to
the decentralized, interconnected network of computer networks that allows computers to communicate
with each other. Individual networks are owned and administered by a variety of organizations, such as
private companies, universities, research labs, government agencies, and municipalities.
[2] The terms "net neutrality" and "network neutrality" have been used to identify various policy concerns
and prescriptions raised by diverse parties to the larger social discussion of broadband Internet
connectivity. Typically, such terms are identified with positions that recommend, at least, some legal or
regulatory restrictions on broadband Internet access services that include non-discrimination requirements
above and beyond any that may be implied by existing antitrust law or Federal Communications
Commission (“FCC”) regulations. Particular concerns and positions are explored in some detail throughout
the Report, but the terms “net neutrality” and “network neutrality” are used here, interchangeably, to refer
to this larger family of views. Unless otherwise clarified, our terminological choice is not meant to endorse
any particular policy position.
[3] See Deborah Platt Majoras, Chairman, FTC, Luncheon Address, The Progress & Freedom Foundation's
Aspen Summit, The Federal Trade Commission in the Online World: Promoting Competition and
Protecting Consumers (Aug. 21, 2006), available at http://ftc.gov/speeches/majoras/060821pffaspenfinal.pdf.
[4] The agenda, transcript, public comments, and other information relating to the Workshop are available on
the FTC's Web site at http://www.ftc.gov/opp/workshops/broadband/index.shtm. In addition, Appendix 1
to this Report provides the identity and affiliation of the Workshop participants.
Throughout this Report, citations to “Public Comments” refer to comments submitted to the FTC
in response to its request for public comments on the topics addressed at the Workshop. In addition,
citations to “Tr.” refer to the Workshop transcript, which is comprised of two volumes. Volume I
corresponds to the proceedings on February 13, 2007; Volume II corresponds to the proceedings on
February 14, 2007. Speakers are identified by last name. Finally, citations to “Participant Presentations”
refer to presentations, including slide presentations and commentary, provided by Workshop participants.
Originally, the Internet developed out of efforts by researchers at American
universities and the U.S. Department of Defense Advanced Research Projects Agency ("DARPA")[5]
in the 1960s and 1970s to create and test interconnected computer networks that would
communicate via data packet switching rather than traditional circuits. Today, the
Internet – which enables applications such as e-mail and browsers that search the World
Wide Web (the “Web”) – connects many millions of end users (and more than one
hundred million Web sites worldwide) to content, applications, and each other. End users
include the initial government and academic centers, corporate entities across all sectors
of the economy, and individuals and associations.
Individual end users (and networks of end users) arrange for Internet access via a
“last mile” connection to an Internet service provider (“ISP”),
6
which provides, in turn,
routing and connections from the ISP’s own network to the Internet. Content and
applications providers offer their products and services to end users via network
operators, which enable connectivity and transport into the middle, or “core,” of the
Internet. Before the turn of the century, most computer users connected to the Internet
using “narrowband,” dial-up telephone connections and modems to transmit data over the
telephone system’s traditional copper wirelines. Much faster “broadband” connections
recently have been deployed using various technologies, including coaxial cable
wirelines, upgraded copper digital subscriber lines (“DSL”), and to a lesser extent fiber-
optic wirelines, wireless, satellite, and broadband over powerlines (“BPL”).
Traditionally, data traffic has traversed the Internet on a “first-in-first-out” and
“best-efforts” basis. This protocol for data transmission was established principally as a
result of DARPA’s original priority, which was to develop an effective technique for
communications among existing interconnected networks, and which placed network
survivability – or the potential for robust network operation in the face of disruption or
infrastructure destruction – as the top goal in designing the overall architecture of this
network of networks. Since the Internet’s earliest days, however, computer scientists
have recognized that network resources are scarce and that traffic congestion can lead to
reduced performance. Although different data transmission protocols and the viability of
usage-based pricing mechanisms were explored throughout the 1980s and 1990s, the
debate over broadband connectivity policy did not reach critical mass until recently.
Technical, business, legal, and regulatory developments all appear to have contributed to
the acceleration of the discussion.
Broadband services generally are subject to the shared
jurisdiction of the FCC, the FTC, and the Department of Justice ("DOJ").[7] FCC
jurisdiction comes chiefly from the Communications Act of 1934, as amended
(“Communications Act”).
8
FTC jurisdiction over broadband arises chiefly under its
[5] Appendix 2 to this Report provides a glossary of acronyms that are frequently used herein.
[6] In this Report, we also refer to broadband ISPs as "broadband providers" and "access providers."
[7] See infra Chapters II and IX.A for discussion of various jurisdictional issues.
[8] 47 U.S.C. §§ 151 et seq.
statutory mandate to prevent “unfair methods of competition” and “unfair or deceptive
acts or practices in or affecting commerce" under the FTC Act.[9] The FTC's authority to
enforce the federal antitrust laws generally is shared with DOJ’s Antitrust Division. The
FCC, FTC, and DOJ have exercised their existing authority in various ways. All three
agencies have scrutinized proposed mergers in Internet-related markets and have
negotiated significant conditions on certain mergers allowed to go forward.[10] In addition,
the FTC has enforced the consumer protection laws, bringing a variety of cases against
Internet service providers that have engaged in allegedly deceptive marketing and billing
practices.[11]
Certain judicial and regulatory decisions in recent years have clarified the scope
of broadband regulation in two fundamental regards. First, since about 2000, the FCC
has undertaken a substantial and systematic deregulation of broadband services and
facilities, concluding that cable, wireline, powerline, and wireless broadband Internet
access services are “information services” that are not subject to common carrier
requirements.
12
The first of these decisions was sustained by the Supreme Court in
National Cable & Telecommunications Association v. Brand X Internet Services.
13
Second, these decisions have served to reinforce and expand FTC jurisdiction
over broadband Internet access services. That jurisdiction had once been regarded as
limited to the extent that the FTC’s general enforcement authority under the FTC Act did
not extend to entities that were “common carriers” under the Communications Act. The
regulatory and judicial decisions at issue, however, confirmed that the larger categories of
broadband Internet access services, as information services, are not exempt from FTC
enforcement of the FTC Act.
In recent years, changes in both user demand and technology have prompted some
broadband providers openly to consider prioritizing certain data traffic to improve
network management and provide premium services. The demand for bandwidth has
increased dramatically, as a growing number of users seek access to increasingly data-
rich Internet content, such as streaming video, which often requires considerable
bandwidth or has particular quality-of-service requirements. That demand has prompted
[9] 15 U.S.C. §§ 41 et seq.
[10] See, e.g., Am. Online, Inc. & Time Warner, Inc., FTC Dkt. No. C-3989 (Dec. 17, 2000) (complaint),
available at http://www.ftc.gov/os/2000/12/aolcomplaint.pdf. See infra Chapters II and IX for discussion
of FCC, FTC, and DOJ scrutiny of mergers in the area of broadband Internet access.
[11] See, e.g., Am. Online, Inc. & CompuServe Interactive Servs., Inc., FTC Dkt. No. C-4105 (Jan. 28, 2004)
(decision and order), available at http://www.ftc.gov/os/caselist/0023000/040203aolcsdo.pdf; Juno Online
Servs., Inc., FTC Dkt. No. C-4016 (June 29, 2001) (decision and order), available at
http://www.ftc.gov/os/2001/06/junodo.pdf.
[12] Particular rulemaking and other administrative decisions along these lines are discussed in more detail in
Chapters II and IX, infra.
[13] 545 U.S. 967 (2005).
concern about present and future congestion and about the need for further infrastructure
investment and development. At the same time, technological developments have made
feasible differentiation in delivery of data of various types, or from various sources,
based on payment to or affiliation with a network operator.
In response, various interested parties, including some content and applications
providers and commentators, have expressed concern about network operators’ use of
these technologies in an environment that is not subject to common carrier regulations.
Some of these providers and commentators, therefore, have proposed that the
transmission of data on the Internet be subject to some type of “net neutrality” regulation
that forbids or places restraints on some types of data or price discrimination by network
operators. Opponents of net neutrality regulation assert that it is not just unnecessary, but
potentially harmful, and that allowing network operators to innovate freely across
technical and business contexts, and to differentiate their networks, will lead to enhanced
service offerings for both end users and content and applications providers.
Before turning to the policy discussion that follows, it is worth clarifying that this
Report reflects the views of the staff of an agency that enforces the federal antitrust and
consumer protection laws. The statutory mission of the FTC is to protect both
competition and consumers by safeguarding and encouraging the proper operation of the
free market. In carrying out that mission, the FTC primarily is focused on maximizing
consumer welfare, as that term is defined in an economic sense in modern antitrust and
consumer protection jurisprudence. We recognize that preserving the diversity of views
expressed on the Internet is one of the animating principles of many of the most ardent
proponents of network neutrality. In this Report, however, we do not attempt to balance
consumer welfare (as we use it, in the economic sense) and free expression.[14] Instead,
the Report focuses on the consumer welfare implications of enacting some form of net
neutrality regulation.
Further, although the goal of increasing competition in broadband Internet access
is fundamental to the FTC staff’s interest and may be widely shared, how best to achieve
that goal is a point of sharp disagreement. What the FTC can offer in this debate is an
explanation of which behavior the antitrust and consumer protection laws already
proscribe and a framework for analyzing which conduct may foster or impede
competition in particular circumstances.
The Report is organized as follows. Chapter I provides technical information on
the functioning of the Internet, and Chapter II provides background information on the
[14] See, e.g., Mercatus Center, Public Comment 27, at 10 ("If the desired outcome is that anyone willing to
pay the monthly price for Internet access can communicate with others at some minimum speed, then a
policy that promotes ‘neutral’ treatment of everyone on the network may be appropriate. But if the desired
outcome is to have as many people as possible connected to the Internet so they can speak if they so
choose, then a different policy, aimed at reducing the consumer’s total cost of Internet access as well as
usage, may be most effective, even if it does not mandate ‘neutrality.’”); Feld, Tr. II at 75 (“It is a question
about balancing. . . . I can say that something does introduce a certain amount of economic inefficiency
and it is still extraordinarily valuable for the contribution that it gives to us as a society, as a democracy . . . . I would argue that is something we should be willing to consider.").
legal and regulatory developments that have fueled the debate over net neutrality
regulation. The purpose of these Chapters is to inform the subsequent policy discussion.
Chapter III identifies and briefly describes the various arguments for and against net
neutrality regulation that have been put forth to date. Chapter IV analyzes potential
conduct by ISPs and other network operators, including vertical integration into content
and applications and discrimination against non-affiliated providers of content and
applications. Chapter V analyzes the potential use of data prioritization technologies by
network operators. Chapter VI considers the current and future state of competition in
the area of broadband Internet access. Chapter VII explores the application of the
antitrust laws to certain potential conduct and business arrangements involving ISPs and
other network operators. Chapter VIII addresses consumer protection issues relating to
broadband Internet access. Chapter IX identifies regulatory, legislative, and other
proposals for broadband Internet access that have been put forth to date. Finally, Chapter
X identifies guiding principles for policy makers to consider prior to enacting any new
laws or regulations in this area.
The Contours of the Debate
Proponents of network neutrality regulation include, among others, some content
and applications providers, non-facilities-based ISPs, and various commentators. They
generally argue that “non-neutral” practices will cause significant and wide-ranging
harms and that the existing jurisdiction of the FCC, FTC, and DOJ, coupled with
Congressional oversight, is insufficient to prevent or remedy those harms. Proponents
suggest that, with deregulation of broadband services, providers of certain broadband
Internet services have the legal ability, as well as economic incentives, to act as
gatekeepers of content and applications on their networks.
Principally, these advocates express concern about the following issues: (1)
blockage, degradation, and prioritization of content and applications; (2) vertical
integration by ISPs and other network operators into content and applications; (3) effects
on innovation at the “edges” of the network (that is, by content and applications
providers); (4) lack of competition in “last-mile” broadband Internet access markets; (5)
remaining legal and regulatory uncertainty in the area of Internet access; and (6) the
diminution of political and other expression on the Internet. Not all proponents of net
neutrality regulation oppose all forms of prioritization, however. For example, some
believe that prioritization should be permitted if access to the priority service is open to
all content and applications providers on equal terms; that is, without regard to the
identity of the content or application provider.
Opponents of network neutrality regulation include, among others, some
facilities-based wireline and wireless network operators and other commentators. They
maintain that net neutrality regulation will impede investment in the facilities necessary
to upgrade Internet access and may hamper technical innovation. They also argue that
the sorts of blocking conduct described by net neutrality proponents are mainly
hypothetical thus far, are unlikely to be widespread, and thus are insufficient to justify
a new, ex ante regulatory regime.
Principally, opponents of net neutrality regulation argue that: (1) neutrality
regulations would set in stone the status quo, precluding further technical and business-
model innovation; (2) effective network management practices require some data
prioritization and may require certain content, applications, or attached devices to be
blocked altogether; (3) new content and applications are likely to require prioritization
and other forms of network intelligence; (4) allowing network operators to innovate
freely and differentiate their networks permits competition that is likely to promote
enhanced service offerings; (5) prohibiting price differentiation would reduce incentives
for network investment generally and may prevent pricing and service models more
advantageous to marginal consumers; (6) vertical integration by network operators into
content and applications and certain bundling practices may benefit consumers; and (7)
there is insufficient evidence of either the likelihood or severity of potential harms to
justify an entirely new regulatory regime, especially given that competition is robust and
intensifying and the market generally is characterized by rapid technological change.
Competing Concerns about Integration and Differentiation
Proponents of net neutrality regulation have raised various concerns about the
effects of data or price differentiation in broadband markets.[15] Certain of these concerns
are tied to vertical integration (broadly construed), as broadband Internet access providers
have begun to offer online content and applications in addition to their primary access
services. Other concerns are independent of such integration.
In particular, proponents are concerned that vertical integration by Internet access
providers into content and applications markets could prompt them to block, degrade, or
charge higher prices to providers of competing content or applications. New information
technologies, such as deep packet inspection, may allow network operators to identify the
source and content of much of the data traffic they handle. Hence, a broadband provider
with significant market power in a given access market, which has an interest in content
or applications generally, could have an incentive to block or degrade competing content
or applications.
Independent of market power considerations, some net neutrality proponents have
raised concerns about the so-called “terminating access monopoly problem,” which could
result from broadband Internet access providers charging content or applications
providers terminating fees for delivery to end users over the last mile. Some proponents
also have expressed concern that if broadband providers are allowed to sign exclusive
deals with content or applications providers, end users may be unable to access much of
the content they desire, thus “balkanizing” the Internet.
On the other hand, because vertical integration may offer efficiencies that are
procompetitive and pro-consumer, not all vertical integration is problematic. More
particularly, opponents of net neutrality regulation maintain that some degree of vertical
[15] See infra Chapters IV and V for more detailed discussion of data differentiation and price differentiation,
respectively.
integration by Internet access providers into content and applications may facilitate
investment in infrastructure, investment in content or applications, optimization of fit
between content and delivery systems, and pricing benefits for consumers. They assert
that such vertical integration also may facilitate entry and thereby increase competition in
broadband Internet access markets. Further, the incentives of broadband providers may
cut both ways: for example, despite potentially having an incentive to favor affiliated
content and applications, access providers have argued that they have an interest in
providing access to a wide range of content and applications, which are essential
complements to the services they sell.
As is the case with data discrimination, it is impossible to determine in the
abstract whether allowing content and applications providers (or even end users) to pay
broadband providers for prioritized data transmission will be beneficial or harmful to
consumer welfare.[16] Such prioritization may provide benefits, such as increased
investment and innovation in networks and improved quality of certain content and
applications that require higher-quality data transmission, as net neutrality opponents
claim. Network neutrality proponents have raised concerns, however, regarding potential
adverse effects of data prioritization, including, among others: (1) a diminution in
innovation by content and applications providers – particularly those unable to pay for
prioritization; (2) the intentional or passive degradation of non-prioritized data delivery;
and (3) increased transaction costs resulting from negotiations between broadband
providers and content and applications providers over prioritization.
The balance between competing incentives on the part of broadband providers to
engage in, and the potential benefits and harms from, discrimination and differentiation in
the broadband area raise complex empirical questions and may call for substantial
additional study of the market generally, of local markets, or of particular transactions.
Again, further evidence of particular conduct would be useful for assessing both the
likelihood and severity of any potential harm from such conduct.
Present and Future Broadband Competition[17]
Proponents and opponents of net neutrality regulation have fundamentally
different views on the present (and likely future) state of competition in the broadband
industry. Proponents argue either that a national market for broadband Internet access is,
in effect, a cable-telephone duopoly or that there are significant failures of competition in
many local markets. Opponents characterize the market as highly competitive.
Broadband Internet access generally is a relatively new industry characterized by high
levels of demand growth from consumers, high market shares held by incumbent cable
and telephone providers, and many new entrants trying to capture some share of the
market.
[16] See infra Chapter V.
[17] Broadband competition issues are discussed throughout this Report, particularly in Chapters VI and VII.
FTC staff did not conduct independent empirical research regarding competition
in local broadband Internet access markets for the purposes of this Report. We note that
opponents of net neutrality regulation have pointed to evidence on a national scale that
(1) access speeds are increasing, (2) prices (particularly speed-adjusted or quality-
adjusted prices) are falling, and (3) new entrants, including wireless and other
competitors, are poised to challenge the incumbent cable and telephone companies. We
note, too, that statistical research conducted by the FCC has tended to confirm these
general trends.[18] For example, broadband deployment and penetration have increased
dramatically since 2000. The FCC estimated that by 2006, broadband DSL service was
available to 79 percent of the households that were served by a telephone company, and
cable modem service was available to 93 percent of the households to which cable
companies could provide cable television service.[19]
Jurisdiction and the Application of Antitrust Law
The competitive issues raised in the debate over network neutrality regulation are
not new to antitrust law, which is well-equipped to analyze potential conduct and
business arrangements involving broadband Internet access. The antitrust laws are
grounded in the principle that competition serves to protect consumer welfare. In
conducting an antitrust analysis, then, the ultimate issue would be whether broadband
providers engage in unilateral or joint conduct that is likely to harm competition and
consumers in a relevant market.
Many proponents of net neutrality regulation are concerned that broadband
Internet access suppliers have market power in the last-mile access market and that they
will leverage that power into adjacent content and applications markets in a way that will
harm competition in those markets and, ultimately, consumers. Such leveraging may
take the form of exclusive dealing arrangements, refusals to deal, vertical integration, or
certain unilateral conduct. All of these types of conduct can be anticompetitive and
harmful to consumers under certain conditions. They also, however, can be
procompetitive, capable of improving efficiency and consumer welfare, which involves,
among other things, the prices that consumers pay, the quality of goods and services
offered, and the choices that are available in the marketplace. Accordingly, such conduct
would be analyzed under the antitrust laws to determine the net effect of such conduct on
consumer welfare.
[18] See, e.g., FCC, HIGH-SPEED SERVICES FOR INTERNET ACCESS: STATUS AS OF JUNE 30, 2006 (2007)
[hereinafter FCC, HIGH-SPEED SERVICES], available at
http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-270128A1.doc. Although some have questioned
whether the methodology used in compiling this data allows the FCC to provide a reliable analysis of
competition in particular markets, the FCC data does provide an overall picture of the significant growth in
broadband penetration over the past few years.
[19] See, e.g., id. at 2-4, 5 tbl.1, 6 tbl.2, 7 tbl.3, 19 tbl.14.
There nonetheless remains significant disagreement with respect to the adequacy
of existing agency oversight. Some proponents of net neutrality regulation have argued
that existing laws, regulations, and agency oversight are inadequate to safeguard
competition in broadband Internet access markets. Those opposed to net neutrality
regulation, however, have argued that current competition law is adequate, that careful
rule-of-reason application of the law is critical to the preservation of competition, and
that additional regulations likely would be over-intrusive and, on balance, a burden to
vibrant competition in broadband markets.
Consumer Protection Issues
Effective consumer protection in the broadband marketplace is essential to robust
competition in that market – regardless of the outcome of the current broadband
connectivity debate. The FTC has been active in enforcing relevant consumer protection
law, bringing a variety of cases against ISPs that have engaged in allegedly deceptive
marketing and billing practices. The Workshop highlighted various consumer protection
concerns. Several Workshop participants argued that such concerns were best addressed
under FTC jurisdiction, given the FTC’s statutory mandate, its interest and experience in
consumer protection issues generally, and its interest and experience in consumer
protection aspects of various Internet services in particular.
Internet access implicates two broad areas of consumer protection: (1) clear and
conspicuous disclosure of material terms of Internet access services; and (2) security and
privacy issues created by broadband Internet access services. Current federal consumer
protection law can address both sets of concerns, although consumer protection issues in
the broadband marketplace may present unique technical and jurisdictional challenges,
both to consumers and law enforcement agencies. Commentators within and without the
Workshop have suggested that federal law enforcement fruitfully could be augmented by
industry self-regulation and expanded federal guidance on pertinent issues.
Suggested Guiding Principles
The FTC’s Internet Access Task Force has conducted a broad examination of the
technical, legal, and economic issues underpinning the debate surrounding broadband
connectivity competition policy. Based on this examination, as well as our experience
with the operation of myriad markets throughout the economy, we identify guiding
principles that policy makers should consider in evaluating options in the area of
broadband Internet access.[20] We have provided an explanation of the conduct that the
antitrust and consumer protection laws already proscribe and a framework for analyzing
which conduct may foster or impede competition in particular circumstances. In
evaluating whether new proscriptions are necessary, we advise proceeding with caution
before enacting broad, ex ante restrictions in an unsettled, dynamic environment.
[20] See infra Chapter X.
There is evidence that the broadband Internet access industry is moving in the
direction of more, not less, competition, including fast growth, declining prices for
higher-quality service, and the current market-leading technology (i.e., cable modem)
losing share to the more recently deregulated major alternative (i.e., DSL). We
nonetheless recognize that not every local broadband market in the United States may
enjoy vigorous competition.[21] This Report does not reflect a case-by-case analysis of the
state of competition in each of the localities that may represent relevant antitrust markets.
There also appears to be substantial agreement on the part of both proponents and
opponents of net neutrality regulation that greater competition in the area of broadband
Internet access would benefit consumers. Thus, to the extent that policy makers are not
content to wait for the market to increase competition, they should consider pursuing
various ways of increasing competition in the provision of broadband Internet access.
Based on what we have learned through our examination of broadband
connectivity issues and our experience with antitrust and consumer protection issues
more generally, we recommend that policy makers proceed with caution in evaluating
proposals to enact regulation in the area of broadband Internet access. The primary
reason for caution is simply that we do not know what the net effects of potential conduct
by broadband providers will be on all consumers, including, among other things, the
prices that consumers may pay for Internet access, the quality of Internet access and other
services that will be offered, and the choices of content and applications that may be
available to consumers in the marketplace.
With respect to data discrimination, broadband providers have conflicting
incentives relating to blockage of and discrimination against data from non-affiliated
providers of content and applications.[22] In the abstract, it is impossible to know which of
these incentives would prove stronger for each broadband provider. Further, even
assuming such discrimination were to take place, it is unknown whether the net effect on
consumer welfare would be adverse. Likewise, it is not possible to know in the abstract
whether allowing content and applications providers to pay broadband providers for
prioritized data transmission will be beneficial or harmful to consumers.[23]
Several open questions that likely will be answered by either the operation of the
current marketplace or technological developments provide additional reasons for
caution. These questions include, among others: (1) How much demand will there be
from content and applications providers for data prioritization?; (2) Will effective data
prioritization, throughout the many networks comprising the Internet, be feasible?; (3)
Would allowing broadband providers to practice data prioritization necessarily result in
the degradation of non-prioritized data delivery?; (4) When will the capacity limitations
of the networks comprising the Internet result in unmanageable or unacceptable levels of
[21] See infra Chapter VI.B.
[22] See infra Chapter IV.
[23] See infra Chapter V.
congestion?; and (5) If that point is reached, what will be the most efficient response
thereto: data prioritization, capacity increases, a combination of these, or some as yet
unknown technological innovation? The eventual answers to these questions may give
policy makers key information about the net effects on consumer welfare arising from the
conduct and business arrangements that network neutrality regulation would prohibit or
limit.
Policy makers also should carefully consider the potentially adverse and
unintended effects of regulation in the area of broadband Internet access before enacting
any such regulation. Industry-wide regulatory schemes – particularly those imposing
general, one-size-fits-all restraints on business conduct – may well have adverse effects
on consumer welfare, despite the good intentions of their proponents. Even if regulation
does not have adverse effects on consumer welfare in the short term, it may nonetheless
be welfare-reducing in the long term, particularly in terms of product and service
innovation. Further, such regulatory schemes inevitably will have unintended
consequences, some of which may not be known until far into the future. Once a
regulatory regime is in place, moreover, it may be difficult or impossible to undo its
effects.
Two aspects of the broadband Internet access industry heighten the concerns
raised by regulation generally. First, the broadband industry is relatively young and
dynamic, and, as noted above, there are indications that it is moving in the direction of
more competition. Second, to date we are unaware of any significant market failure or
demonstrated consumer harm from conduct by broadband providers. Policy makers
should be wary of enacting regulation solely to prevent prospective harm to consumer
welfare, particularly given the indeterminate effects that potential conduct by broadband
providers may have on such welfare.
The federal antitrust agencies, the FTC and the DOJ, and the FCC share
jurisdiction over broadband Internet access, with each playing an important role in
protecting competition and consumers in this area. Further, as a byproduct of the
ongoing debate over network neutrality regulation, the agencies have a heightened
awareness of the potential consumer harms from certain conduct by, and business
arrangements involving, broadband providers. Perhaps equally important, many
consumers are now aware of such issues. Consumers – particularly online consumers –
have a powerful collective voice. In the area of broadband Internet access, they have
revealed a strong preference for the current open access to Internet content and
applications.
The FTC has been involved in the Internet access area for over a decade and will
continue to be involved in the evolving area of broadband access. The FTC Act is
sufficiently flexible to allow the FTC to enforce the antitrust and consumer protection
laws in most industries, including those involving new and ever-changing technologies.
The fundamental principles of antitrust and consumer protection law and economics that
we have applied for years are as relevant to the broadband industry as they are to other
industries in our economy.
The FTC will continue to devote substantial resources to maintaining competition
and protecting consumers in the area of broadband Internet access, using a variety of
tools. The FTC will continue to enforce the antitrust and consumer protection laws in
evaluating conduct and business arrangements involving broadband access. Further, the
FTC’s Broadband Connectivity Competition Policy Workshop and this Report exemplify
some of the diverse resources the agency may bring to bear on Internet access issues, in
addition to specific law enforcement actions. The Workshop and Report reflect the
agency’s interest in and commitment to developing competition and consumer protection
policy. Finally, the agency will continue to expend considerable efforts at consumer
education, industry guidance, and competition advocacy in the important area of Internet
access.
I. THE INTERNET: HISTORICAL AND TECHNICAL BACKGROUND
The Internet is a decentralized network of computer networks that enables
millions of private and public computers around the world to communicate with each
other. This interconnection of multiple computer networks, which otherwise would
function only as a series of independent and isolated islands, gives rise to the term
“Internet” as we know it today.
24
This Chapter is organized as follows. Section A
summarizes the historical development of the Internet and describes how data is routed
over it; Section B discusses the relationship between “last-mile” Internet service
providers, Internet “backbone” networks, and content and applications providers; and
Section C explores the technical aspects of network management, data prioritization, and
other forms of data “discrimination.”
A. Historical Development
The Internet developed out of research efforts funded by the U.S. Department of
Defense Advanced Research Projects Agency in the 1960s and 1970s to create and test
interconnected computer networks.[25] The fundamental aim of computer scientists
working on this “ARPANET” was to develop an overall Internet architecture that could
connect and make use of existing computer networks that might, themselves, be different
[24] The Federal Networking Council, a group of U.S. federal agency representatives involved in the early
development of federal networking, for example, adopted this definition of the term “Internet” in 1995:
“Internet” refers to the global information system that–
(i) is logically linked together by a globally unique address space based on the Internet
Protocol (IP) or its subsequent extensions/follow-ons;
(ii) is able to support communications using the Transmission Control Protocol/Internet
Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-
compatible protocols; and
(iii) provides, uses or makes accessible, either publicly or privately, high level services
layered on the communications and related infrastructure described herein.
U.S. Federal Networking Council, Resolution dated October 24, 1995, in Robert E. Kahn & Vinton G.
Cerf, What Is the Internet (and What Makes It Work) n.xv (1999), available at http://www.cnri.reston.va.us/what_is_internet.html.
The convention of writing “internet” in lower case letters typically refers to interconnected
networks generally, while writing “Internet” with an uppercase “I” is generally used to refer to the original
or current version of the Internet. DOUGLAS E. COMER, THE INTERNET BOOK 60 (4th ed. 2007).
Sometimes, though, individual networks are also referred to as being alternative "Internets." E.g.,
INTERNET2, ABOUT US (2007), available at http://www.internet2.edu/about.
[25] See generally David D. Clark, The Design Philosophy of the DARPA Internet Protocols, COMPUTER
COMM. REV., Aug. 1988, at 106, available at http://nms.csail.mit.edu/6829-papers/darpa-internet.pdf;
BARRY M. LEINER ET AL., A BRIEF HISTORY OF THE INTERNET,
http://www.isoc.org/internet/history/brief.shtml
(last visited June 18, 2007); COMER, supra note 24, at 62.
both architecturally and technologically.[26] The secondary aims of the ARPANET project
were, in order of priority: (1) Internet communication must continue despite the loss of
networks or gateways between them; (2) the Internet architecture must support multiple
types of communications services; (3) the architecture must accommodate a variety of
networks; (4) it must permit distributed, decentralized management of its resources; (5)
the architecture must be cost-effective; (6) the architecture must permit attachment by
computer devices with a low level of effort; and (7) the resources used in the Internet
architecture must be accountable.[27] That is to say, ARPANET's first priority was
network survivability in a potentially hostile environment, and its last priority was
providing a system for allocating charges for passing data packets from network to
network.[28]
By the late 1960s, computer scientists were experimenting with non-linear
“packet-switched” techniques to enable computers to communicate with each other.
29
Using this method, computers disassemble information into variable-size pieces of data
called “packets” and forward them through a connecting medium to a recipient computer
that then reassembles them into their original form. Each packet is a stand-alone entity,
like an individual piece of postal mail, and contains source, destination, and reassembly
information. Unlike traditional circuit-switched telephone networks, packet-switched
networks do not require a dedicated line of communication to be allocated exclusively for
the duration of each communication. Instead, individual data packets comprising a larger
piece of information, such as an e-mail message, may be dispersed and sent across
[26] Clark, supra note 25, at 106 ("The top level goal for the DARPA Internet Architecture was to develop an
effective technique for multiplexed utilization of existing interconnected networks.”).
[27] Id. at 107.
[28] Id. Besides survivability, "[t]here were also other concerns, such as implementation efficiency,
internetwork performance, but these were secondary considerations at first." LEINER ET AL., supra note 25.
David D. Clark, who served as chief Protocol Architect for TCP/IP from 1981-89, has noted that the
ARPANET’s original goals differ from what an architecture designed for commercial purposes might have
looked like:
This set of goals might seem to be nothing more than a checklist of all the
desirable network features. It is important to understand that these goals are in order of
importance, and an entirely different network architecture would result if the order were
changed. For example, since this network was designed to operate in a military context,
which implied the possibility of a hostile environment, survivability was put as a first
goal, and accountability as a last goal. During wartime, one is less concerned with
detailed accounting of resources used than with mustering whatever resources are
available and rapidly deploying them in an operational manner. While the architects of
the Internet were mindful of [resource] accountability, the problem received very little
attention during the early stages of the design, and is only now being considered. An
architecture primarily for commercial deployment would clearly place these goals at the
opposite end of the list.
Clark, supra note 25, at 107.
[29] See generally LEINER ET AL., supra note 25.
multiple paths before reaching their destination and then being reassembled.[30] This
process is analogous to the way that the individual, numbered pages of a book might be
separated from each other, addressed to the same location, forwarded through different
post offices, and yet all still reach the same specified destination, where they could be
reassembled into their original form.[31]
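By way of illustration only, the following sketch (written in Python; the packet fields, the sample message, and the eight-character packet size are simplified assumptions that do not correspond to any actual protocol format) shows the basic idea of disassembling a message into self-contained packets and reassembling it even when the packets arrive out of order:

    import random
    from dataclasses import dataclass

    @dataclass
    class Packet:
        # Each packet is a stand-alone entity carrying its own addressing and
        # reassembly information, like an individually addressed piece of mail.
        destination: str
        sequence: int
        payload: str

    def disassemble(message, destination, size=8):
        # Split a message into independently routable packets.
        pieces = [message[i:i + size] for i in range(0, len(message), size)]
        return [Packet(destination, seq, piece) for seq, piece in enumerate(pieces)]

    def reassemble(packets):
        # Restore the original message regardless of the order of arrival.
        return "".join(p.payload for p in sorted(packets, key=lambda p: p.sequence))

    packets = disassemble("An e-mail message, split into packets.", "receiving-host")
    random.shuffle(packets)  # packets may travel different paths and arrive out of order
    assert reassemble(packets) == "An e-mail message, split into packets."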
By the mid-1970s, computer scientists had developed several software
communications standards, or protocols, for connecting computers within the same
network. At about the same time, ARPANET scientists developed a protocol for
connecting different networks to each other, called the Transmission Control
Protocol/Internet Protocol ("TCP/IP") software suite.[32] The TCP component of the suite
controls the disassembly and reassembly of data packets sent from a computer server,
where the data resides.[33] The IP component specifies the formatting and addressing
scheme for transmitting data between sender and recipient computers.[34]
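The division of labor between the two components can be sketched as follows (again in Python, for illustration only; the field names and addresses are simplified assumptions that omit nearly all of the fields in the actual TCP and IP header formats):

    from dataclasses import dataclass

    @dataclass
    class TcpSegment:
        # Transport-level information used by the communicating hosts:
        # the sequence number lets the receiver reassemble the data in order.
        sequence_number: int
        payload: bytes

    @dataclass
    class IpPacket:
        # Internetwork-level information used by the routers along the path:
        # the addresses needed to forward the packet toward its destination.
        source_address: str
        destination_address: str
        segment: TcpSegment  # the transport segment is carried inside the packet

    data = b"data handed down from an application"
    packets = [IpPacket("192.0.2.1", "198.51.100.7", TcpSegment(i, data[i:i + 8]))
               for i in range(0, len(data), 8)]

    # Routers read only the addressing; the receiving host uses the sequence
    # numbers inside the segments to put the payload back together.
    received = b"".join(p.segment.payload
                        for p in sorted(packets, key=lambda p: p.segment.sequence_number))
    assert received == data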
This approach requires that individual networks be connected together by gateway
interface devices, called switches or routers.[35] Thus, interconnected networks are, in
[30] See generally JONATHAN E. NUECHTERLEIN & PHILIP J. WEISER, DIGITAL CROSSROADS: AMERICAN
TELECOMMUNICATIONS POLICY IN THE INTERNET AGE 39-45 (paperback ed., 2007) (comparing circuit-
switched and packet-switched networks).
[31] See id. at 42.
[32] Vinton G. Cerf & Robert E. Kahn, A Protocol for Packet Network Intercommunication, 22 IEEE
TRANSACTIONS ON COMM. 637 (1974), available at
http://www.cs.princeton.edu/courses/archive/fall06/cos561/papers/cerf74.pdf.
[33] In the original paper describing the TCP/IP protocol, Cerf and Kahn explain:
Processes that want to communicate present messages to the TCP for transmission, and
TCP’s deliver incoming messages to the appropriate destination processes. We allow the
TCP to break up messages into segments because the destination may restrict the amount
of data that may arrive, because the local network may limit the maximum transmission
size, or because the TCP may need to share its resources among many processes
concurrently. . . .
From this sequence of arriving packets (generally from different HOSTS
[computers]), the TCP must be able to reconstruct and deliver messages to the proper
destination processes.
Id. at 640.
[34] "Since the GATEWAY [(router)] must understand the address of the source and destination HOSTS, this
information must be available in a standard format in every packet which arrives at the GATEWAY. This
information is contained in an internetwork header prefixed to the packet by the source HOST.” Id. at 638.
“If the TCP is to determine for which process an arriving packet is intended, every packet must contain a
process header (distinct from the internetwork header) that completely identifies the destination process.”
Id. at 640.
[35] See id. at 638.
effect, a series of routers connected by transmission links. Packets of data are passed
from one router to another, via the transmission links. Typically, each router has several
incoming transmission links through which packets arrive and several outgoing links
through which the router can send packets. When a packet arrives at an incoming link,
the router will use a software algorithm to determine the outgoing link through which the
packet should be routed. If that outgoing link is free, the packet is sent out immediately.
If the relevant outgoing link is busy transmitting other packets, however, the newly
arrived packet must wait. Usually, the packet will be temporarily held, or “buffered,” in
the router’s memory, waiting its turn until the relevant outgoing link is free. Thus,
buffering is a method of dealing with temporary surges in Internet traffic, which can be
variable or “bursty.” If too many packets are buffered during a period of congestion,
however, the router may have no choice but to reroute or drop altogether some of those
packets.[36] Because no transmission mechanism can be completely reliable, computer
scientists also developed methods of retransmitting data to deal with dropped or
otherwise incorrectly transmitted packets.[37]
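The following sketch (in Python, for illustration only; the buffer size and packet labels are arbitrary assumptions rather than a model of any actual router) illustrates buffering at a single congested outgoing link and the packet drops that end-host retransmission must then repair:

    from collections import deque

    class OutgoingLink:
        # A router buffer for one outgoing transmission link: packets wait in
        # first-in-first-out order, and arrivals that find the buffer full are
        # simply dropped -- delivery is "best efforts," not guaranteed.
        def __init__(self, capacity):
            self.queue = deque()
            self.capacity = capacity
            self.dropped = 0

        def arrive(self, packet):
            if len(self.queue) < self.capacity:
                self.queue.append(packet)  # buffered until the link is free
            else:
                self.dropped += 1          # congestion: the packet is lost

        def transmit_one(self):
            # Send the packet that has waited longest, if any.
            return self.queue.popleft() if self.queue else None

    link = OutgoingLink(capacity=8)
    for i in range(12):                    # a burst of 12 packets arrives at once
        link.arrive("packet-%d" % i)
    print(len(link.queue), "buffered;", link.dropped, "dropped")  # 8 buffered; 4 dropped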
Two of the resulting features of this TCP/IP protocol are that it transmits data
between networks on a "first-in-first-out" and "best-efforts" basis.[38] Therefore, although
the resulting interconnected networks are generally able to transmit data successfully
[36] See generally Edward W. Felten, Nuts and Bolts of Network Neutrality 1-2 (AEI-Brookings Joint Center,
Working Paper No. RP-06-23, 2006), available at http://www.aei-
brookings.org/publications/abstract.php?pid=1106. See also Jon M. Peha, The Benefits and Risks of
Mandating Network Neutrality and the Quest for a Balanced Policy, 34th Research Conference on
Communication, Information, & Internet Policy 5-6 (2006), available at
http://web.si.umich.edu/tprc/papers/2006/574/Peha_balanced_net_neutrality_policy.pdf
(describing the use
of algorithms to manage traffic flows across a network).
[37] As Cerf and Kahn explained:
No transmission can be 100 percent reliable. We propose a timeout and positive
acknowledgement mechanism which will allow TCP’s to recover from packet losses
from one HOST to another. . . . [T]he inclusion of a HOST retransmission capability
makes it possible to recover from occasional network problems and allows a wide range
of HOST protocol strategies to be incorporated. We envision it will occasionally be
invoked to allow HOST accommodation to infrequent overdemands for limited buffer
resources, and otherwise not used much.
Cerf & Kahn, supra note 32, at 643.
[38] See generally DAVID CLARK ET AL., NEW ARCH: FUTURE GENERATION INTERNET ARCHITECTURE: FINAL
TECHNICAL REPORT (2003), available at http://www.isi.edu/newarch/iDOCS/final.finalreport.pdf
(sponsored by DARPA Information Technology Office). “The original Internet provided a very simple and
minimally specified packet transfer service, sometimes called ‘best effort’. Crudely, what ‘best effort’
means is that the network makes no specific commitments about transfer characteristics, such as speed,
delays, jitter, or loss.” Id. at 7.
between senders and receivers using TCP/IP, congestion or other technical issues can
affect transmission and, as a result, no particular quality-of-service level is guaranteed.
39
Also, during the Internet’s early years, network architectures generally were based
on what has been called the “end-to-end argument.”
40
This argument states that computer
application functions typically cannot, and should not, be built into the routers and links
that make up a network’s middle or “core.” Instead, according to this argument, these
functions generally should be placed at the “edges” of the network at a sending or
receiving computer.
41
This argument also recognizes, however, that there might be
certain functions that can be placed only in the core of a network. Sometimes, this
argument is described as placing “intelligence” at or near the edges of the network, while
leaving the core’s routers and links mainly “dumb” to minimize the potential for
transmission and interoperability problems that might arise from placing additional
complexity into the middle of the network.
42
Throughout the 1970s and 1980s, the interconnection of computer networks using
TCP/IP continued to grow, spurred by uses such as e-mail.
43
In the mid-1980s, the
National Science Foundation (“NSF”) recognized that computer networks were having an
important impact on scientific research by facilitating communications between
researchers working in different locations. NSF and DARPA had been jointly funding a
network to connect computer science researchers (“CSNET”) since the late 1970s. In
1985, NSF announced a plan to connect one hundred universities to the Internet, in
addition to five already-existing supercomputer centers located around the country.
44
39
In the original paper describing the TCP/IP protocol, Cerf and Kahn recognized that because individual
networks have differing characteristics, “[t]he transmit time for this data is usually dependent upon internal
network parameters such as communications media data rates, buffering and signaling strategies, routing,
propagation delays, etc.” Cerf & Kahn, supra note 32, at 637. “The success or failure of a transmission
and its performance in each network is governed by different time delays in accepting, delivering, and
transporting the data.” Id. “TCP may need to share its resources among many processes concurrently.” Id.
at 640. Likewise, resources needed to buffer high volumes of incoming packets may also be “limited.” Id.
at 643. Thus, “[c]ongestion at the TCP level is flexibly handled owing to the robust retransmission and
duplicate detection strategy.” Id. at 645.
40
See, e.g., J.H. Saltzer et al., End-to-End Arguments in System Design, 2 ACM TRANSACTIONS ON
COMPUTER SYS. 277 (1984).
41
Id. at 277 (“The argument appeals to application requirements, and provides a rationale for moving
function upward in a layered system, closer to the application that uses that function.”).
42
See, e.g., Adam Thierer, Are “Dumb Pipe” Mandates Smart Public Policy? Vertical Integration, Net
Neutrality, and the Network Layers Model, in NET NEUTRALITY OR NET NEUTERING: SHOULD BROADBAND
INTERNET SERVICES BE REGULATED? 73, 79 (Thomas M. Lenard & Randolph J. May, eds., 2006).
43
LEINER ET AL., supra note 25 (“Thus, by 1985, Internet was already well established as a technology
supporting a broad community of researchers and developers, and was beginning to be used by other
communities for daily computer communications. Electronic mail was being used broadly across several
communities . . . .”).
44
COMER, supra note 24, at 72-76.
Recognizing the increasing importance of this interconnected network to U.S.
competitiveness in the sciences, however, NSF embarked on a new program with the goal
of extending Internet access to every science and engineering researcher in the country.
In 1988, NSF, in conjunction with a consortium of private-sector organizations,
completed a new long-distance, wide-area network, dubbed the “NSFNET” backbone.
Although private entities were now involved in extending the Internet, its design
still reflected ARPANET’s original goals. Although the original ARPANET was
decommissioned in 1990, its influence continued because TCP/IP had supplanted or
marginalized most other wide-area computer network protocols in existence at that
time,
45
and because its design, which provided for generality and flexibility, proved to be
durable in a number of contexts.
46
At the same time, its successful growth made clear
that these design priorities no longer matched the needs of users in certain situations,
particularly regarding accounting and resource management.
47
By 1992, the volume of traffic on NSFNET was approaching capacity, and NSF
realized it did not have the resources to keep pace with the increasing usage.
Consequently, the members of the consortium formed a private, non-profit organization
called Advanced Networks and Services
(“ANS”) to build a new backbone with
transmission lines having thirty times more capacity.
48
For the first time, a private
organization – not the government – principally owned the transmission lines and
computers of a backbone.
At the same time that privately owned networks started appearing, general
commercial activity on the NSFNET was still prohibited by an Acceptable Use Policy.
49
Thus, the expanding number of privately owned networks were effectively precluded
from exchanging commercial data traffic with each other using the NSFNET backbone.
Several commercial backbone operators circumvented this limitation in 1991, when they
established the Commercial Internet Exchange (“CIX”) to interconnect their own
backbones and exchange traffic directly. Recognizing that the Internet’s growth was outpacing
NSF’s ability to manage it, NSF decided in 1993 to leave the management of the backbone to
the competing commercial backbone operators. By 1995, this expanding network of
45
LEINER ET AL., supra note 25.
46
“In the context of its priorities, the Internet architecture has been very successful. The protocols are
widely used in the commercial and military environment, and have spawned a number of similar
architectures.” Clark, supra note 25, at 113.
47
Id.
48
COMER, supra note 24, at 75-76.
49
“On the NSFNET Backbone – the national-scale segment of the NSFNET – NSF enforced an
‘Acceptable Use Policy’ (AUP) which prohibited Backbone usage for purposes ‘not in support of Research
and Education.’” LEINER ET AL., supra note 25.
commercial backbones had permanently replaced NSFNET, effectively privatizing the
Internet.
50
The growth of the Internet has been fueled in large part by the popularity of the
World Wide Web, created in 1989.
51
The number of Web sites on the Internet has grown
from one in 1989, to 18,000 in 1995, to fifty million in 2004, and to more than one
hundred million in 2006.
52
This incredible growth has been due to several factors,
including the realization by businesses that they could use the Internet for commercial
purposes, the decreasing cost and increasing power of personal computers, the
diminishing complexity of creating Web sites, and the expanding use of the Web for
personal and social purposes.
From its creation to its early commercialization, most computer users connected
to the Internet using a “narrowband” dial-up telephone connection and a special modem
to transmit data over the telephone system’s traditional copper wirelines, typically at a
rate of up to 56 kilobits per second (“Kbps”).
53
Much faster “broadband” connections
have subsequently been deployed using a variety of technologies.
54
These faster
technologies include coaxial cable wirelines, upgraded copper digital subscriber lines,
fiber-optic wirelines, and wireless, satellite, and broadband-over-powerline
technologies.
55
50
Michael Kende, The Digital Handshake: Connecting Internet Backbones 5 (FCC Office of Plans and
Policy, Working Paper No. 32, 2000), available at
http://www.fcc.gov/Bureaus/OPP/working_papers/oppwp32.pdf
.
51
See generally WORLD WIDE WEB CONSORTIUM, ABOUT THE WORLD WIDE WEB CONSORTIUM (W3C),
http://www.w3.org/Consortium (last visited June 22, 2007). Other popular uses of the Internet include: the
transfer of data files from one computer to another through a File Transfer Protocol (“FTP”); electronic
mail using Simple Mail Transfer Protocol (“SMTP”); and the use of TELetype NETwork (“TELNET”) to
use one computer to access a different computer at another location. See generally NUECHTERLEIN &
WEISER, supra note 30, at 130. The Internet is often described as being comprised of multiple “layers,”
including: a physical layer consisting of the hardware infrastructure used to link computers to each other; a
logical layer of protocols, such as TCP/IP, that control the routing of data packets; an applications layer
consisting of the various programs and functions run by end users, such as a Web browser that enables
Web-based e-mail; and a content layer, such as a Web page or streaming video transmission. See id. at
118-21.
52
Marsha Walton, Web Reaches New Milestone: 100 Million Sites, CNN, Nov. 1, 2006,
http://www.cnn.com/2006/TECH/internet/11/01/100millionwebsites/index.html
(last visited June 15,
2007).
53
See NUECHTERLEIN & WEISER, supra note 30, at 134-35.
54
See id. at 134-47. Broadband has been defined by the FCC as services that provide transmission speeds
of 200 Kbps or higher in at least one direction. E.g., FCC,
HIGH-SPEED SERVICES, supra note 18, at 5 tbl.1.
Some critics, however, believe this definition is outdated. See, e.g., G. Sohn, Tr. I at 97 (“[I]t defines
broadband at a ridiculously slow speed, 200 kilobits per second.”).
55
See infra Chapter VI for a discussion of various broadband technologies.
The thousands of individual networks that make up the global Internet are owned
and administered by a variety of organizations, such as private companies, universities,
research labs, government agencies, and municipalities. Data packets may potentially
travel from their originating computer server across dozens of networks and through
dozens of routers before they reach a “last-mile” Internet service provider
56
and arrive at
a destination computer. This process of disassembly, transmission, and reassembly of
data packets may take as little as a fraction of a second for a simple piece of information
like a text e-mail traveling along a high-speed network, or it may take several hours for a
larger piece of information like a high-resolution video traveling a long distance along a
low-speed network.
57
This network of networks connects millions of individuals and organizations in a
way that allows almost instantaneous communications using computers, computerized
mobile devices, and other network attachments. End users interact with each other
through an ever-expanding universe of content and applications, such as: e-mail, instant
messaging, chat rooms, commercial Web sites for purchasing goods and services, social
networking sites, Web logs (“blogs”), music and video downloads, political forums,
voice over IP (“VoIP”) telephony services, streaming video applications, and multi-
player network video games. Internet users include individuals of virtually all ages and
walks of life, established businesses, fledgling entrepreneurs, non-profit groups,
academic and government institutions, and political organizations.
The TCP/IP protocol suite has been updated periodically since its introduction.
58
In recent years, however, some computer experts and other interested parties have
questioned the TCP/IP suite’s thirty-year-old first-in-first-out and best-efforts
characteristics.
59
Likewise, in light of the increasing deployment of applications that may
56
See infra Chapter I.B.1 for a discussion of last-mile ISPs.
57
See, e.g., NUECHTERLEIN & WEISER, supra note 30, at 136.
58
Kahn & Cerf, supra note 24 (“Refinement and extension of these protocols and many others associated
with them continues to this day by way of the Internet Engineering Task Force.”). See also INTERNET
ENGINEERING TASK FORCE, OVERVIEW OF THE IETF, http://www.ietf.org/overview.html (last visited May
16, 2007) (“The Internet Engineering Task Force (IETF) is a large open international community of
network designers, operators, vendors, and researchers concerned with the evolution of the Internet
architecture and the smooth operation of the Internet.”). IETF activities take place under the umbrella of
the Internet Society. See generally INTERNET SOCIETY, ABOUT THE INTERNET SOCIETY,
http://www.isoc.org/isoc
(last visited May 16, 2007) (The Internet Society “is the organization home for the
groups responsible for Internet infrastructure standards, including the Internet Engineering Task Force
(IETF) and the Internet Architecture Board (IAB).”).
59
E.g., David Farber & Michael Katz, Op-Ed., Hold Off On Net Neutrality, WASH. POST, Jan. 19, 2007, at
A19 (“The current Internet supports many popular and valuable services. But experts agree that an updated
Internet could offer a wide range of new and improved services, including better security against viruses,
worms, denial-of-service attacks and zombie computers; services that require high levels of reliability, such
as medical monitoring; and those that cannot tolerate network delays, such as voice and streaming video.
To provide these services, both the architecture of the Internet and the business models through which
services are delivered will probably have to change.”); Christopher S. Yoo, Network Neutrality and the
Economics of Congestion, 94 GEO. L.J. 1847, 1863 & n.74 (2006) (noting the opinion of computer scientist
David Farber that the current Internet architecture is “getting old”).
operate better in a non-end-to-end environment, some have reexamined the end-to-end
design argument.
60
Some also have explored what a next generation Internet architecture
might look like, with the goal of managing the emerging tension between the Internet’s
open characteristics and more technologically demanding new applications.
61
In
addition, some observers have suggested that the Internet’s continued exponential growth
and the proliferation of resource-intensive content and applications like video file sharing
and the prospect of Internet Protocol television (“IPTV”) may outstrip the Internet’s
current capacity and cause it to become significantly congested or crash altogether.
62
The problem of network congestion, in particular, was recognized in the original
paper describing the TCP/IP suite and, although it received less attention than
ARPANET’s other original design priorities, computer scientists continued to be mindful
of the issue. Some, therefore, continued to explore different transmission protocols and
the viability of market-based pricing mechanisms through the 1980s and 1990s.
63
Further, as data-routing technologies have advanced in recent years, some network
operators have begun openly to consider using prioritization and other active
management practices to improve network management and provide certain premium
60
See, e.g., Marjory S. Blumenthal & David D. Clark, Rethinking the Design of the Internet: The End-to-
End Arguments vs. the Brave New World, 1 ACM TRANSACTIONS INTERNET TECH. 70 (2001) (concluding
that the open, general nature of the Internet historically associated with the end-to-end argument should be
preserved); ROBERT E. KAHN, CORPORATION FOR NATIONAL RESEARCH INITIATIVES, INTERNET
EVOLUTION, GOVERNANCE AND THE DIGITAL OBJECT ARCHITECTURE: WORKSHOP ON SCORM
SEQUENCING AND NAVIGATION 8 (Feb. 23, 2005), available at
http://www.handle.net/presentations_plugfest9/PlugFest9_Plenary_kahn.ppt
(discussing whether the
Federal Network Council’s 1995 Internet definition, see supra note 24, should be updated to also include
services “integrated with” communications and related infrastructures); Press Release, Stanford Center for
Internet and Society, The Policy Implications of End-to-End (Dec. 1, 2000), available at
http://cyberlaw.stanford.edu/e2e
(workshop chaired by Professor Lawrence Lessig) (“In an increasing
range of contexts . . . e2e [(end-to-end)] is being questioned. Technologies that undermine e2e are
increasingly being deployed; other essential services, such as quality of service, are being developed in
ways that are inconsistent with e2e design.”).
61
E.g., CLARK ET AL., supra note 38, at 4 (“The goal of this project was to consider the following question:
if we could now design the Internet from scratch, knowing what we know today, how would we make the
basic design decisions?”).
62
E.g., DELOITTE TOUCHE TOHMATSU, TELECOMMUNICATIONS PREDICTIONS: TMT TRENDS 2007 (2007),
available at
http://www.deloitte.com/dtt/cda/doc/content/us_tmt_%202007_Telecom_Predictions_011606.pdf
.
According to this report, “[o]ne of the key possibilities for 2007 is that the Internet could be approaching its
capacity. The twin trends causing this are an explosion in demand, largely fueled by the growth in video
traffic and the lack of investment in new, functioning capacity.” Id. at 4.
63
E.g., Jeffrey K. MacKie-Mason & Hal R. Varian, Pricing the Internet, in PUBLIC ACCESS TO THE
INTERNET 269 (Brian Kahin & James Keller eds., 1995). According to MacKie-Mason and Varian:
“Congestion is likely to be a serious problem in the future Internet, and past proposals to control it are
unsatisfactory. We think an economic approach to allocating scarce Internet resources is warranted.” Id. at
284. “Our objective is not to raise profits above a normal rate of return by pricing backbone usage. Rather,
our goal is to find a pricing mechanism that will lead to the most efficient use of existing resources, and
will guide investment decisions appropriately.” Id.
services for a fee.
64
As a result, computer scientists, network operators, content and
applications providers, and other interested parties have increasingly debated the
significance of the Internet’s historical and current architecture and its implications for
the Internet’s future development.
65
64
See, e.g., At SBC, It’s All About “Scale and Scope,” BUS. WK., Nov. 7, 2005,
http://www.businessweek.com/@@n34h*IUQu7KtOwgA/magazine/content/05_45/b3958092.htm
(interview with SBC Telecommunications’ CEO Edward Whitacre). According to Whitacre:
[T]here's going to have to be some mechanism for these people who use these pipes to
pay for the portion they're using. Why should they be allowed to use my pipes?
The Internet can't be free in that sense, because we and the cable companies
have made an investment and for a Google or Yahoo! or Vonage or anybody to expect to
use these pipes [for] free is nuts!
Id. See also Marguerite Reardon, Qwest CEO Supports Tiered Internet, ZDNET NEWS, Mar. 15, 2006,
http://articles.techrepublic.com.com/2100-1035_11-6050109.html. Qwest CEO Richard Notebaert has
stated his company would like to offer prioritized data transmission in the same way that express parcel
service may be purchased from Federal Express or UPS. In his view, “[i]t’s possible that (these companies)
would like to have differentiated service. . . . And if you have enough money, we can make a lot of things
happen.” Id. “Would this give some content providers an advantage over others? . . . Well, yeah. We’re
all trying to provide a bit of differentiation for a competitive edge. That’s what business is about.” Id.
65
For example, some of the Internet’s early designers have offered the following account:
One should not conclude that the Internet has now finished changing. The
Internet, although a network in name and geography, is a creature of the computer, not
the traditional network of the telephone or television industry. It will, indeed it must,
continue to change and evolve at the speed of the computer industry if it is to remain
relevant. It is now changing to provide such new services as real time transport, in order
to support, for example, audio and video streams. The availability of pervasive
networking (i.e., the Internet) along with powerful affordable computing and
communications in portable form (i.e., laptop computers, two-way pagers, PDAs, cellular
phones), is making possible a new paradigm of nomadic computing and communications.
This evolution will bring us new applications – Internet telephone and, slightly
further out, Internet television. It is evolving to permit more sophisticated forms of
pricing and cost recovery, a perhaps painful requirement in this commercial world. It is
changing to accommodate yet another generation of underlying network technologies
with different characteristics and requirements, from broadband residential access to
satellites. New modes of access and new forms of service will spawn new applications,
which in turn will drive further evolution of the net itself.
The most pressing question for the future of the Internet is not how the
technology will change, but how the process of change and evolution itself will be
managed. As this paper describes, the architecture of the Internet has always been driven
by a core group of designers, but the form of that group has changed as the number of
interested parties has grown. With the success of the Internet has come a proliferation of
stakeholders – stakeholders now with an economic as well as an intellectual investment
in the network. We now see, in the debates over control of the domain name space and
the form of the next generation IP addresses, a struggle to find the next social structure
that will guide the Internet in the future. The form of that structure will be harder to find,
given the large number of concerned stake-holders. At the same time, the industry
B. Major Internet Components
1. “Last-Mile” Internet Service Providers
“Last-mile”
66
Internet service providers offer the network connections that link
end users to the wider Internet.
67
By connecting its end-user customers to the many
networks comprising the Internet backbone, an ISP provides its customers access to the
end-user computers of any other ISP in the world connected to that backbone. Computer
users in the United States have had nearly ubiquitous last-mile access to dial-up Internet
connections of up to 56 Kbps since the late 1990s through telephone modems.
68
In
recent years, faster broadband connections have supplanted dial-up service for a rapidly
growing number of computer users who demand faster access to the increasingly
sophisticated and data-rich content and applications available on the Internet.
69
Principally, end users receive last-mile broadband Internet service through coaxial cable
wireline or upgraded copper digital subscriber wireline connections; other platforms,
such as fiber-optic wirelines, wireless, satellite, and broadband over powerlines, are also
increasingly available to connect end users to the Internet.
70
Basic residential service packages are typically available on a flat-rate basis to
home computer users.
71
ISPs may require that end users with more demanding needs,
like a medium or large business, purchase a business-class or other type of premium
struggles to find the economic rationale for the large investment needed for future
growth, for example to upgrade residential access to more suitable technology. If the
Internet stumbles, it will not be because we lack for technology, visions, or motivation. It
will be because we cannot set a direction and march collectively into the future.
LEINER ET AL., supra note 25.
66
Networks that connect end users to the broader Internet are generally referred to as “last-mile” ISPs.
Networks that transmit data from a content or applications provider’s computer server(s) to the broader
Internet are sometimes referred to as “first-mile” ISPs.
67
Today, major last-mile wireline broadband ISPs include: AT&T, Comcast, Covad, Cox
Communications, and Verizon. Major wireless broadband ISPs include: AT&T, Sprint Nextel, T-Mobile,
and Verizon Wireless.
68
See NUECHTERLEIN & WEISER, supra note 30, at 134-35.
69
See id. at 134-47.
70
According to the most recent data available from the FCC, most broadband consumers access the Internet
today by cable modem or DSL. Of the 64.6 million high-speed lines in the United States as of June 30,
2006, 44.1% were cable modem, 36.4% DSL or other high-speed telephone line, 17.0% mobile wireless,
1.1% fiber-to-the-premise, 0.8% satellite, 0.5% fixed wireless, and 0.01% broadband over powerlines (and
other lines). FCC,
HIGH-SPEED SERVICES, supra note 18, at 5 tbl.1.
71
See generally Lehr, Tr. I at 37 (discussing “the market’s current attraction to . . . flat-rate pricing”);
Brenner, Tr. II at 96. See also, e.g., VERIZON, VERIZON HIGH SPEED INTERNET,
http://www22.verizon.com/content/consumerdsl/plans/all+plans/all+plans.htm
(last visited May 17, 2007).
service package.
72
In addition, end users can purchase for a premium fee access to a
specialized virtual private network (“VPN”) offering a defined quality-of-service level
over a reserved portion of an ISP’s network.
73
Last-mile broadband wireline architecture can take various forms. A last-mile
ISP can extend a fiber-optic wireline from a backbone connection to a
neighborhood node, to the curb of a premise, or all the way to the end user’s premise. If
the fiber runs only to the node or curb, the ISP can then use a cable or DSL connection
for the remaining distance to the end user’s premise.
74
DSL wirelines provide a
dedicated amount of bandwidth to each end user, but can transmit data up to only about
three miles without the use of a repeater. Accordingly, transmission speeds can vary
depending on an end user’s distance from a repeater.
75
Cable wirelines offer shared
bandwidth among many customers. Thus, the transmission speed for an individual cable
modem customer can vary with the number of customers who are using the network
simultaneously.
76
Last-mile wireless networks using wireless fidelity (“Wi-Fi”) or worldwide
interoperability for microwave access (“WiMAX”) technologies can be set up by
deploying multiple antennas on street lights, traffic signals, and buildings, so that
multiple wireless hotspots overlap each other to form a continuous “mesh” network of
wireless signals. An initial connection to a backbone network also must be made in order
to provide access to the wider Internet.
77
Several major telecommunications companies
also offer mobile wireless Internet services over their wireless phone networks.
78
Three
providers offer broadband Internet service via satellite.
79
An end user must have
a computer or other device that is configured for wireless Internet use to access these
72
E.g., COMCAST, COMCAST WORKPLACE, http://www.comcast.com/wa-business/internet.html (last visited
May 14, 2007). Last-mile access for large enterprise customers, particularly those with multiple locations,
typically involves the use of dedicated, high-capacity facilities often referred to as special access or
dedicated access services. See In re Special Access Rates for Price Cap Local Exch. Carriers, 20 FCC Rcd
1994, 1995-96 (2005) (order and notice of proposed rulemaking) [hereinafter Special Access NPRM].
73
See, e.g., CHARLES B. GOLDFARB, ACCESS TO BROADBAND NETWORKS: CONGRESSIONAL RESEARCH
SERVICE REPORT TO CONGRESS 10-11 (2006), available at
http://www.ipmall.info/hosted_resources/crs/RL33496_060629.pdf
.
74
Id. at 9-11.
75
See generally FCC, FCC CONSUMER FACTS: BROADBAND ACCESS FOR CONSUMERS,
http://www.fcc.gov/cgb/consumerfacts/dsl2.html
(last visited June 22, 2007).
76
See generally id.
77
Wireless broadband providers that do not have their own facilities connecting their transmitters (e.g., cell
towers) to their switches typically purchase special access services from an incumbent local exchange
carrier or other provider of such services. See Special Access NPRM, 20 FCC Rcd at 1995-96.
78
GOLDFARB, supra note 73, at 10.
79
Id. at 10-11.
networks. In addition, there are now over forty deployments of broadband-over-powerline
technologies in the U.S., most of which are in trial stages.
80
Today’s last-mile networks generally are partitioned asymmetrically to provide
more bandwidth for data traveling from an ISP’s facilities to the end user’s computer
(“downstream”) than in the other direction (“upstream”). Typically, this is done because
end users request much more data from other server computers than they, themselves,
send out.
81
As a result, asymmetric architecture may constrain content and applications
that require the end user simultaneously to send and receive content at the same speeds
and volumes, such as two-way video transmissions.
82
Also, ISPs have the technical
capability to reserve portions of last-mile bandwidth for specific applications.
83
2. Internet Backbone Operators
Since 1995, when the expanding number of commercial backbone networks
permanently replaced NSFNET, commercial backbones have generally interconnected
with each other through voluntary, market-negotiated agreements.
84
To this day, there
are no general, industry-specific regulations that govern backbone interconnection in the
U.S.
85
Instead, commercial backbone operators independently make decisions about
interconnection by weighing the benefits and costs on a case-by-case basis.
86
Typically,
80
Id. at 11-12.
81
Id. at 4, 9.
82
Id. at 9.
83
For example, Verizon reserves one fiber of its downstream fiber-to-the-home service specifically for the
company’s video service, while a separate fiber carries all other incoming traffic. Id. at 10. AT&T
reserves 19 of 25 megabits of downstream end-user bandwidth specifically for the company’s video
service. Id. at 11. AT&T customers can purchase between 1.5 and 6 Mbps of the remaining downstream
bandwidth for Internet access and voice services. Id.
84
Observers have noted that:
Particularly in the Internet’s early days, many backbone providers exchanged traffic at
government-sponsored Network Access Points (NAPs)–the Internet’s equivalent to
public airports, where the routes of many different carriers converge. (When the
government privatized the Internet, it transferred control of these points to commercial
providers.) Internet backbone providers now increasingly rely on privately arranged
points of interconnection, largely because of congestion at the NAPs.
NUECHTERLEIN & WEISER, supra note 30, at 132.
85
See generally id. at 133 (“These peering and transit agreements are completely unregulated. Neither the
FCC nor any other governmental authority regulates the prices that a larger backbone network may charge
a smaller one for transit services or mandates that backbone providers interconnect at all.”).
86
As one commentator notes:
Currently, there are no domestic or international industry-specific regulations that govern
how Internet backbone providers interconnect to exchange traffic, unlike other network
backbones connect to each other under one of two types of arrangements. In a “peering”
arrangement, backbones of similar size engage in a barter arrangement in which
backbone A carries traffic for backbone B in exchange for backbone B carrying a similar
amount of traffic for backbone A. In this arrangement, exchanged traffic generally is
destined only for the other backbone’s end users.
In a “transit” arrangement, a smaller
backbone pays a larger backbone to carry its customers’ traffic to all end users on the
Internet.
87
To date, market forces have encouraged interconnection among backbones
and between backbones and last-mile ISPs.
88
Today, these backbones make up the core or “middle” of the Internet. Generally,
individual backbone networks are made up of a multiplicity of redundant, high-speed,
high-capacity, long-haul, fiber-optic transmission lines that join at hubs or points of
interconnection across the globe.
89
Transmission over the backbone is generally reliable
even when one component fails because there are multiple routes of
transmission from one computer to another.
90
A backbone’s customers include ISPs
providing last-mile connectivity to end users, providers of content and applications that
wish to connect their computer servers directly to a backbone, and specialized companies
that lease space on shared or dedicated computer servers to smaller content and
applications providers.
3. Providers of Content and Applications
Millions of organizations and individuals connected to the Internet’s edges
provide an ever-expanding universe of content and applications to end users.
Commercial entities and other organizations provide a large portion of such content and
applications, but individuals are increasingly contributing content and applications to the
Internet for personal, social, and creative purposes.
91
services, such as long distance voice services, for which interconnection is regulated.
Rather, Internet backbone providers adopt and pursue their own interconnection policies,
governed only by ordinary laws of contract and property, overseen by antitrust rules.
Kende, supra note 50, at 2.
87
See generally NUECHTERLEIN & WEISER, supra note 30, at 132-33.
88
Cf. Ryan, Tr. I at 237.
89
NUECHTERLEIN & WEISER, supra note 30, at 131-38. See also Li Yuan & Gregory Zuckerman, Level 3
Regains Luster Amid Web-Video Boom, WALL ST. J., Dec. 21, 2006, at C1 (providing a map of Level 3’s
fiber-optic backbone). Today, major U.S. backbone operators include: Verizon, AT&T, Global Crossing,
Level 3, Qwest, SAVVIS, and Sprint-Nextel.
90
COMER, supra note 24, at 137-42.
91
Popular examples include: Blogger.com (Web logs); flickr.com (photo sharing); YouTube.com (audio
and video files); and MySpace.com (social networking pages, Web logs, photo sharing, audio and video
files). See also Lev Grossman, Time’s Person of the Year: You, TIME, Dec. 25, 2006, at 38, available at
http://www.time.com/time/magazine/article/0,9171,1569514,00.html
.
Content and applications providers use various methods to distribute their
offerings over the Internet. Smaller organizations and individuals typically lease space
on a shared or dedicated computer server from a specialized company that provides a
connection to the wider Internet, typically through a negotiated agreement with a
backbone operator.
92
Large companies may build their own server farms with direct
access to an Internet backbone.
93
Some companies also provide Web sites where users
can post self-generated content, such as photos, blogs, social networking pages, and audio
and video files, while the companies themselves manage the site’s underlying technical
aspects.
94
Increasingly, content and applications providers are also copying their content
and applications to multiple computer servers distributed around the world, a technique
called local caching.
95
This practice allows data to be transmitted to end users more
quickly, over a shorter physical distance, and using fewer routers. This strategy, in turn,
generally decreases the potential for transmission problems such as the delay or dropping
of data packets.
96
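A very rough sense of how a content provider might steer a request to a nearby cached copy is given by the following sketch. The cache host names are placeholders, and real content distribution systems select servers through DNS, routing data, and load measurements rather than the simple handshake timing shown here.

    import socket
    import time

    # Placeholder cache locations; a real deployment would publish many more.
    CACHE_HOSTS = ["cache-east.example.net", "cache-west.example.net", "cache-eu.example.net"]

    def connect_time(host, port=80, timeout=2.0):
        """Approximate network distance by timing a TCP connection to the cache."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return float("inf")  # unreachable caches are never chosen

    def nearest_cache(hosts=CACHE_HOSTS):
        """Return the cache that answered a connection attempt most quickly."""
        return min(hosts, key=connect_time)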
Today, many applications can be delivered from a provider’s computer server via
the Internet to a customer’s computer and installed automatically. This ability to transmit
applications cheaply and directly to end users allows applications providers to update
their programs frequently and to deliver new versions to customers quickly. Likewise,
the Internet allows content providers to transmit cheaply an expanding array of content,
such as music and video downloads.
Originally, most Web content consisted of static text and graphics files that could
be viewed graphically using a basic Web browser and a narrowband connection. Some
of the newest content and applications, however, are time-sensitive, bandwidth-intensive,
or both. VoIP, for example, is sensitive to both “latency” – the amount of time it takes a
packet of data to travel from source to destination – and “jitter” – on-again, off-again
92
See, e.g., TheHostingChart, http://www.thehostingchart.com (last visited June 22, 2007).
93
See, e.g., Pepper, Tr. I at 93. Pepper notes that “a lot of these large providers made enormous
investments in big server farms to bring content closer to consumers with their caching servers. Bringing
content closer to consumers reduces the need to go across multiple hops [between networks].” Id. See also
Yoo, supra note 59, at 1881-83; John Markoff & Saul Hansell, Hiding in Plain Sight, Google Seeks More
Power, N.Y.
TIMES, June 14, 2006, at A1, available at
http://www.nytimes.com/2006/06/14/technology/14search.html?ei=5090&en=d96a72b3c5f91c47&ex=130
7937600.
94
See supra note 91.
95
Content and applications providers may construct multiple server farms in various locations. See supra
note 93. Alternatively, they can contract with a third party to manage this function. See, e.g., Misener, Tr.
II at 191 (“Essentially, you have a company that has set up edge serving facilities. That is to say server
farms outside major metropolitan areas.”). See also Yoo, supra note 59, at 1881-83; William C. Symonds,
Traffic Cops of the Net, BUS. WK., Sept. 25, 2006, at 88, available at
http://www.businessweek.com/magazine/content/06_39/b4002094.htm
(profiling third-party content
distribution company Akamai Technologies).
96
See Pepper, Tr. I at 93; Yoo, supra note 59, at 1882.
delay associated with bursts of data traffic.
97
High-resolution video files and streaming
video applications are examples of bandwidth-intensive content and applications that
some observers suggest are already challenging the Internet’s capacity.
98
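Latency and jitter, as defined above, can be computed directly from a trace of packet send and arrival times. The timestamps below are invented for the example, and the jitter figure is a simple average of changes in packet spacing rather than the smoothed estimator used by real-time transport equipment.

    def average_latency(send_times, arrival_times):
        """Mean one-way delay, in seconds, for packets matched by position in the trace."""
        delays = [arrive - send for send, arrive in zip(send_times, arrival_times)]
        return sum(delays) / len(delays)

    def average_jitter(arrival_times):
        """Mean absolute change in the spacing between consecutive packet arrivals."""
        gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
        changes = [abs(d - c) for c, d in zip(gaps, gaps[1:])]
        return sum(changes) / len(changes)

    # Illustrative trace: packets sent every 20 ms, arriving with variable delay.
    sent = [0.000, 0.020, 0.040, 0.060, 0.080]
    arrived = [0.055, 0.077, 0.093, 0.121, 0.139]
    print("average latency:", average_latency(sent, arrived), "seconds")
    print("average jitter:", average_jitter(arrived), "seconds")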
C. Network Management, Data Prioritization, and Other Forms of Data
“Discrimination”
The differential treatment of certain data packets by network operators, such as
prioritizing some packets over others, is often referred to as data “discrimination.”
99
This
Section addresses Internet congestion (one of the primary reasons cited for engaging in
such data discrimination), the various types and uses of data discrimination, and the
feasibility of end users detecting and avoiding certain types of data discrimination.
1. Internet Congestion
As explained above, the problem of network congestion has been recognized
since the Internet’s earliest days. Network resources such as computer processing power,
transmission media, and router buffer memory are finite, like other resources.
Congestion, therefore, can occur at any point on the Internet. Of course, end users can
purchase more powerful computers and network operators can expand the capacity of
their networks, but the computers, physical transmission media, and routers that comprise
the Internet can still transport and process only a certain amount of data at any given
time. Although it happens rarely, if too many computers send bursts of packets at the
same time, a network may become temporarily overloaded.
The TCP/IP protocol generally has enabled the Internet to function at a workable
level, even as Internet use has undergone tremendous growth during the last decade.
100
Nonetheless, Internet transmissions are still subject to variable performance and periods
of congestion. Some observers suggest that the use of bandwidth-intensive applications
like certain peer-to-peer file-sharing protocols by even a small minority of users is
already consuming so many network resources as to be worrisome. This situation is of
particular concern to some experts, who believe that the use of such applications by even
a small portion of Internet users may effectively degrade service for the remaining
97
See, e.g., Blumenthal & Clark, supra note 60, at 72-73; GOLDFARB, supra note 73, at 2-3 & n.4.
98
See, e.g., GOLDFARB, supra note 73, at 3-4.
99
“Unfortunately, engineers, economists, and lawyers have different definitions for discrimination.” Peha,
supra note 36, at 3. Some technology experts distinguish between so-called “minimal” or “needs-based”
discrimination, where packets are discarded or otherwise treated differently only when absolutely necessary
(as in the case of congestion), and “non-minimal” or “active” discrimination, where packets are treated
differently for some other, discretionary reason. See, e.g., Felten, supra note 36, at 4. The introduction to
Chapter IV below includes a discussion of how we use the term “discrimination” in analyzing the potential
effects on consumer welfare of various conduct by ISPs and other network operators.
100
COMER, supra note 24, at 165-69.
majority of end users.
101
Some observers suggest that such applications are already
testing the Internet’s existing capacity and may even potentially crash the Internet, or
parts of it.
102
2. Alleviating Internet Congestion
Several techniques have been used to alleviate short-term Internet congestion.
Because packet switching does not confine a transmission to a single, fixed path, data can be
dispersed across multiple routes, allowing networks to reroute individual data packets around
points of congestion and avert delays. The TCP
component of the TCP/IP suite also monitors delays and slows the packet-transmission
rates accordingly.
103
Some applications, however, such as certain peer-to-peer file-
sharing protocols, operate in a different manner. When congestion occurs, these
applications do not slow their rates of data transmission. Rather, they take advantage of
TCP’s built-in reduction mechanism, under which compliant flows back off, and continue to
send data as fast as they can.
104
Therefore, some networks have actively restricted or blocked altogether these
kinds of applications, on the grounds that the networks need to preserve an equitable
level of service for the majority of their end users.
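The contrast drawn in this paragraph can be made concrete with a small sketch. The compliant sender below lengthens its retransmission timer, and therefore slows down, whenever an acknowledgement is late, roughly as note 103 describes; the aggressive sender never backs off. The specific constants are illustrative assumptions, not the actual TCP algorithm.

    class CompliantSender:
        """Slows down, roughly as TCP does, when acknowledgements fail to arrive in time."""

        def __init__(self):
            self.timeout = 0.5  # illustrative initial retransmission timer, in seconds

        def on_ack(self):
            # Delivery succeeded in time; let the timer drift back toward its initial value.
            self.timeout = max(0.5, self.timeout * 0.9)

        def on_timer_expired(self):
            # No acknowledgement in time: retransmit and lengthen the timer, which
            # lowers the effective sending rate while the network is congested.
            self.timeout = min(60.0, self.timeout * 2)
            return "retransmit"

    class AggressiveSender:
        """Keeps sending at full speed regardless of congestion signals."""

        def on_timer_expired(self):
            return "retransmit immediately"  # no backoff; claims capacity others relinquish

When most senders behave like the first class, congestion clears as they collectively slow down; a sender of the second kind absorbs the capacity the others give up, which is the behavior some network operators cite when they restrict such applications.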
Networks may also use “hot potato” routing policies that hand off to other
networks at the earliest possible point data that is not destined for termination on their
own networks, thus reducing the use of network resources.
105
Local caching of data by
content and applications providers further helps to alleviate congestion by reducing the
101
According to Peha, “[t]raffic from a very small number of users can dominate the network and starve
everybody else out. Peer-to-peer, in particular, is a problem today, and other applications might come
along.” Peha, Tr. I at 22. See also SANDVINE, INC., NETWORK NEUTRALITY: A BROADBAND WILD WEST?
4 (2005), available at http://www.sandvine.com/general/getfile.asp?FILEID=37
(reporting that it is
common for less than 20% of users/applications/content to consume 80% of a network’s resources);
ANDREW PARKER, CACHELOGIC, P2P IN 2005 (2005), available at
http://www.cachelogic.com/home/pages/studies/2005_01.php
(reporting that in 2004 peer-to-peer traffic
constituted 60% of overall Internet data traffic and 80% of upstream data traffic); Press Release, Sandvine,
Inc., EDonkey – Still King of P2P in France and Germany (Sept. 13, 2005), available at
http://www.sandvine.com/news/pr_detail.asp?ID=88
(reporting that P2P file-sharing traffic in the UK and
North America represents up to 48% of all downstream bandwidth and 76% of all upstream traffic).
102
See, e.g., Brenner, Tr. II at 99 (recounting that “[w]e all know the famous story of downloading the
Victoria’s Secret streaming video when so much demand was placed on it, nobody could get a download”).
Beyond this oft-cited example, however, staff has not been presented with any specific evidence of an
instance where a significant portion of the Internet has substantially crashed, apart from general examples
of temporary network congestion. See also DELOITTE TOUCHE TOHMATSU, supra note 62, at 4.
103
TCP sends and receives acknowledgements each time a packet is sent to and received from a computer.
Also, TCP automatically starts a timer whenever a computer sends a packet. The timed period depends on
the distance to the recipient computer and delays on the Internet. If the timer runs out before the sending
computer receives an acknowledgement, TCP retransmits the packet and lengthens the timed period to
accommodate the network delay, effectively slowing the transmission rate. Once enough computers in the
network slow down, the congestion clears. See COMER, supra note 24, at 140-41.
104
Peha, supra note 36, at 7.
105
NUECHTERLEIN & WEISER, supra note 30, at 132.
distance over which data must travel and the number of routers that might potentially
delay or drop packets. In addition, as discussed below, some networks have proposed
prioritizing data and providing other new types of quality-of-service assurances to
alleviate the effects of congestion.
3. Packet-inspection and Flow-control Technologies
To treat some data packets differently than others, as opposed to simply using a
first-in-first-out and best-efforts approach, a network operator must be able to identify
certain relevant characteristics of those packets.
106
One source of identifying information
is the packet’s header, which contains the IP address of its source and destination. The
packet header also contains several types of information that suggest the type of
application required to open the data file, such as the source and destination port
numbers, the transport protocol, the differentiated service code point or traffic class, and
the packet’s length.
107
Additionally, the header contains the Media Access Control
(“MAC”) address of the packet’s source and destination, which provides information
about the manufacturer of the device attached to the network.
108
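The header fields described above sit at fixed positions that a router can read directly. The sketch below pulls the source and destination addresses, transport protocol, differentiated services code point, packet length, and (for TCP or UDP) the port numbers out of a raw IPv4 packet. It assumes a well-formed IPv4 packet and is far simpler than a production classifier.

    import socket
    import struct

    def header_fields(raw_packet: bytes) -> dict:
        """Extract the IPv4 and transport-layer header fields a router might examine."""
        header_len = (raw_packet[0] & 0x0F) * 4           # IPv4 header length, in bytes
        total_len, = struct.unpack("!H", raw_packet[2:4])
        protocol = raw_packet[9]                          # 6 = TCP, 17 = UDP
        fields = {
            "source": socket.inet_ntoa(raw_packet[12:16]),
            "destination": socket.inet_ntoa(raw_packet[16:20]),
            "protocol": protocol,
            "dscp": raw_packet[1] >> 2,                   # differentiated services code point
            "length": total_len,
        }
        if protocol in (6, 17):                           # TCP and UDP carry port numbers
            src_port, dst_port = struct.unpack("!HH", raw_packet[header_len:header_len + 4])
            fields["source_port"] = src_port
            fields["destination_port"] = dst_port
        return fields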
In recent years, router manufacturers have refined packet-inspection technologies
to provide network operators with a wide range of information about the data traffic on
their networks, including information not provided in packet headers.
109
These
technologies were developed in part to help local area networks direct traffic more
efficiently and to thwart security risks.
110
Deep packet inspection may also be
implemented on the Internet to examine the content of packet streams – even search for
keywords in text – and to take action based on content- or application-specific policies.
111
Such actions could involve tracking, filtering, or blocking certain types of packet streams.
Further, deep packet inspection can map the information it accumulates to databases
containing, for instance, demographic or billing information.
112
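At its simplest, the payload-inspection step might resemble the following sketch, in which a hypothetical policy table maps keywords found in a packet’s payload to actions such as throttling, prioritizing, or blocking. Real deep-packet-inspection equipment reassembles streams, decodes application protocols, and runs at line rate in hardware; none of that is modeled here, and the keywords and actions are invented for the example.

    # Hypothetical content policy: a keyword found in a payload maps to an action.
    CONTENT_POLICY = {
        b"bittorrent": "throttle",
        b"invite sip:": "prioritize",         # SIP signaling often precedes a VoIP call
        b"known-malware-signature": "block",
    }

    def inspect_payload(payload: bytes) -> str:
        """Return the policy action for the first keyword found in the payload, if any."""
        lowered = payload.lower()
        for keyword, action in CONTENT_POLICY.items():
            if keyword in lowered:
                return action
        return "forward"  # default: no content-specific treatment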
106
Peha, supra note 36, at 3 (discussing the criteria that networks can consider when deciding how to
prioritize packets).
107
Id. at 4. Some computer scientists believe that port numbers have become an unreliable tool for
determining a packet’s associated application. According to Peha, “[o]nce upon a time, you could learn
who the application was, through something called a port number, but that hasn't been reliable or
meaningful for a number of years.” Peha, Tr. I at 18.
108
Peha, supra note 36, at 4.
109
See, e.g., Pepper, Tr. I at 83-87.
110
E.g., Tim Greene, The Evolution of Application Layer Firewalls, NETWORK WORLD, Feb. 2, 2004,
available at http://www.networkworld.com/news/2004/0202specialfocus.html
(“Now the latest Internet
defense technology – deep packet inspection firewalls – is being touted as the best line of defense against
worms that can sneak past earlier technology to wreck havoc in corporate networks.”).
111
Peha, supra note 36, at 4-5.
112
Id.
Another relatively new technology that may be implemented to reveal information
about packet streams is flow classification.
This technology monitors the size of packets
in a data stream, the time elapsed between consecutive packets, and the time elapsed
since the stream began, with the goal of making reasonable determinations about the
nature of the packets in the stream. Thus, flow classification may reveal information
about a packet stream even if the individual packets themselves are encrypted against
packet inspection.
113
With the development of these two technologies, it is now cost-
effective for a network operator to gain extensive knowledge about the nature of the data
traveling across its network.
114
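The flow-classification heuristic described here can be sketched briefly. The hypothetical classifier below tracks only packet sizes and timing for a single flow and guesses that a long-lived, steady stream of small packets at roughly 30 Kbps is VoIP, echoing the example in note 113. The thresholds are illustrative; real classifiers rely on statistical or machine-learned models.

    class FlowClassifier:
        """Guess the nature of a packet stream from sizes and timing, without reading payloads."""

        def __init__(self):
            self.first_seen = None
            self.last_seen = None
            self.packets = 0
            self.total_bytes = 0

        def observe(self, packet_size: int, timestamp: float):
            if self.first_seen is None:
                self.first_seen = timestamp
            self.last_seen = timestamp
            self.packets += 1
            self.total_bytes += packet_size

        def guess(self) -> str:
            duration = (self.last_seen - self.first_seen) if self.packets > 1 else 0.0
            if duration < 5.0:
                return "unknown"                          # too little history to judge
            average_size = self.total_bytes / self.packets
            rate_kbps = self.total_bytes * 8 / duration / 1000
            if average_size < 300 and 20 <= rate_kbps <= 50:
                return "likely VoIP"                      # steady, small packets near 30 Kbps
            if rate_kbps > 1000:
                return "likely video or bulk file transfer"
            return "unclassified"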
4. Data Prioritization and Other Forms of Data Discrimination
Recently, some network operators have suggested that they would like to use
these new technologies to prioritize certain data traffic or to provide other types of
quality-of-service assurances to content and applications providers and/or end users in
exchange for a premium fee.
115
In contrast to the practice of transmitting data on a first-
in-first-out and best-efforts basis, network operators could use a router algorithm to favor
the transmission of certain packets based on characteristics such as their source,
destination, application type, or related network attachment. One or more of these
strategies could be employed to manage network traffic generally. Or, they might be
used by a network operator to actively degrade certain non-favored traffic.
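A router algorithm of the kind just described could be as simple as the strict-priority sketch below, in which a policy function assigns each packet a class and the scheduler always serves the most favored class first, falling back to first-in-first-out within a class. The favored address and port numbers are placeholders chosen for the example.

    import heapq
    from itertools import count

    FAVORED_SOURCES = {"203.0.113.10"}      # placeholder "favored" address
    LATENCY_SENSITIVE_PORTS = {5060, 5061}  # e.g., SIP signaling used by VoIP services

    _arrival_order = count()                # preserves first-in-first-out order within a class

    def priority_of(packet: dict) -> int:
        """Lower numbers are served sooner; this policy is purely illustrative."""
        if packet.get("source") in FAVORED_SOURCES:
            return 0
        if packet.get("destination_port") in LATENCY_SENSITIVE_PORTS:
            return 1
        return 2                            # everything else is handled on a best-efforts basis

    class PriorityScheduler:
        def __init__(self):
            self._queue = []

        def enqueue(self, packet: dict):
            heapq.heappush(self._queue, (priority_of(packet), next(_arrival_order), packet))

        def dequeue(self):
            """Return the next packet to transmit, or None if nothing is waiting."""
            return heapq.heappop(self._queue)[2] if self._queue else None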
Packets going to or from certain favored addresses could be given priority
transmission. Likewise, network operators could give priority to packets for latency-
sensitive applications such as VoIP or network video games. In the alternative, routers
could be programmed to reroute, delay, or drop certain packets.
116
For example, a
network operator could block packets considered to be a security threat.
117
It could drop
or otherwise delay packets associated with unaffiliated or otherwise disfavored users,
content, or applications.
118
A network could apply such treatment only in certain
113
Id. at 4. For example, if a network operator detects a steady stream of packets flowing at 30 Kbps across
its network for a period of time, it might conclude those packets are part of a VoIP telephony transmission.
Id.
114
Id. at 5.
115
See supra note 64. Quality of service “typically involves the amount of time it takes a packet to traverse
the network, the rate at which packets can be sent, and the fraction of packets lost along the way.” Peha,
supra note 36, at 5.
116
E.g., Peha, supra note 36, at 4-6.
117
E.g., Craig McTaggart, Was the Internet Ever Neutral?, 34th Research Conference on Communication,
Information, & Internet Policy 9 (2006), available at
http://web.si.umich.edu/tprc/papers/2006/593/mctaggart-tprc06rev.pdf
(discussing blocking as a tool to
control network abuse).
118
E.g., Peha, supra note 36, at 12–13 (describing scenarios in which network operators might block rival
services, specific content, or software).
circumstances, such as during periods of congestion, after a quota of packets has been
met, or until certain usage fees are paid.
119
Some observers, however, question whether
implementing wide-scale prioritization or similar schemes across multiple networks
having differing technical characteristics is, in fact, even technically possible.
120
Network operators also could provide separate physical or logical channels for
different classes of traffic.
121
Another method for favoring certain Internet traffic is to
reserve capacity on last-mile bandwidth for certain packet streams to provide a minimum
level of quality.
122
Similarly, a network operator could limit the amount of bandwidth
available to an end user, thereby degrading or effectively blocking altogether the use of
119
See, e.g., id. at 5-6.
120
See, e.g., Alcatel-Lucent, Public Comment 1. According to Alcatel-Lucent, an opponent of network
neutrality regulation:
[I]ndustry standards would have to be adopted that put in place common policies for the
labeling and prioritization of data packets. . . . The vast majority of Internet traffic must
traverse the networks of numerous broadband service providers. This means that in order
to favor the traffic of Service A over Service B during its entire trip through the Internet,
each service provider and backbone network would have to prioritize and label packets in
exactly the same way – a scenario that does not exist today. The idea that a service
provider could maintain priority routing for its “preferred data packets” between a user in
Washington, DC and Los Angeles, CA is not possible absent a comprehensive agreement
between all network service providers to treat and identify data packets based on a
common standard not currently in existence. Absent such developments, the data would
almost certainly change hands at least once, likely stripping it of any prioritization it
might have enjoyed inside the network of a sole provider.
Id. at 5. Likewise, a representative of Google, a network neutrality proponent, states that:
[L]ast mile providers who want to give some sort of priority service, you know, only
have control over their own network. It’s not obvious to us how you can offer this kind
of end-to-end service. It’s not obvious to us how you identify the traffic in order to
segregate it, that you’re going to give priority to. And how do you do this segregation
without degrading other traffic?
Davidson, Tr. I at 230-31.
121
For example, a network operator could physically send favored data traffic over a lightly used
connection, while sending other data traffic over a more heavily used connection. Or, the network could
use logical separation to send traffic on the same physical connection, but use different service flows, as in
the case of a virtual local network (“VLN”). Peha, supra note 36, at 6.
122
For example, AT&T’s Project Lightspeed and Verizon’s FiOS services reserve portions of last-mile
bandwidth for their proprietary video services. GOLDFARB, supra note 73, at 10-11, 17-18. These network
operators also could sell reserved capacity to content or applications providers in return for a quality-of-
service guarantee. Verizon, for example, has such plans for its FiOS service. Id. at 10.
bandwidth-intensive content or applications.
123
A network operator also could treat data
packets differently by providing preferential access to services, such as local caching.
124
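Per-user bandwidth limits of the sort just mentioned are often implemented with a token-bucket scheme, sketched below. The sustained rate and burst size are illustrative values; an operator’s actual limits, if it imposed any, would be set by its service tiers.

    import time

    class TokenBucket:
        """Allow traffic up to a sustained rate, with a limited burst, for one end user."""

        def __init__(self, rate_bytes_per_second: float, burst_bytes: float):
            self.rate = rate_bytes_per_second
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last_refill = time.monotonic()

        def allow(self, packet_size: int) -> bool:
            now = time.monotonic()
            # Credit tokens for the time elapsed, up to the permitted burst size.
            self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
            self.last_refill = now
            if self.tokens >= packet_size:
                self.tokens -= packet_size
                return True   # within the allowance: forward the packet
            return False      # over the allowance: delay or drop the packet

    # Illustrative cap of roughly 1 Mbps, with a 64 kilobyte burst, for one subscriber.
    subscriber_limit = TokenBucket(rate_bytes_per_second=125_000, burst_bytes=64_000)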
Data also can be treated differently through the use of pricing structures, such as
service tiers, to provide a certain quality-of-service level in exchange for payment.
125
In
a fee-for-priority system, content and applications providers and/or end users paying
higher fees would receive quicker, more reliable data transmissions. Sometimes, such an
arrangement is referred to as a “fast lane.” Other data might simply be provided on a
best-efforts basis. Similarly, a network operator might assess fees to end users based on
their behavior patterns, a practice sometimes referred to as “content billing” or “content
charging.”
126
5. Detecting Data Discrimination
127
Although differential data treatment may be easy to detect in some instances, such as
outright blocking, in many others it may be difficult for an end user to
distinguish between performance problems resulting from deliberate discrimination and
problems resulting from other, more general causes.
128
For example, an end user whose
Internet traffic is treated differently than other traffic might experience poor performance
in one or more aspects, such as delays in transmitting data, delays in using applications,
or sporadic jitter. Such effects, however, can also result from general network
123
See Network Neutrality: Competition, Innovation, and Nondiscriminatory Access: Hearing Before the S.
Comm. on Commerce, Sci., & Transp., 109th Cong. 13 (2006) (testimony of Earl W. Comstock, President
and CEO, COMPTEL), available at http://www.digmedia.org/docs/comstock-020706.pdf
.
124
Id. at 14.
125
Peha, supra note 36, at 6.
126
Id.
127
The difficulties associated with end-user detection of data discrimination discussed in this Section would
appear to be equally applicable to enforcement of any network neutrality regulation that prohibited data
discrimination by ISPs and other network operators.
128
See, e.g., Pepper, Tr. I at 93. According to Pepper:
[T]here are techniques that consumers actually have readily available to them to test their
own bandwidth and performance latency between . . . the home, or the office, and the
first POP [(point of presence)], right?
And so, those techniques are actually relatively available. The problem is that,
depending on the service you’re trying to download, the application that you’re using, it
may – you may be going through two or three hops [between networks], or as many as a
dozen hops across the Internet. When you go across multiple hops across multiple
networks, it’s more difficult for a consumer to know.
Id. See also Brenner, Tr. II at 98 (“[T]here are many points between the key strokes of the customer and
the download in which the speed can be affected.”).
congestion.
129
Distinguishing the two may be particularly difficult for end users not
possessing a technical background. Researchers, however, are working to develop
diagnostic tools to detect the differential treatment of data.
130
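By way of illustration only, the following sketch (in Python) shows the kind of simple end-to-end measurement on which such diagnostic tools might build: repeated connection attempts to two destinations, with the mean delay and its variation (jitter) compared. The host names, port, and sample count are hypothetical assumptions rather than details drawn from any source cited in this Report, and, as the discussion above notes, a difference in the measurements is only suggestive, because general congestion can produce the same symptoms.

    import socket
    import statistics
    import time

    def connect_times(host, port=80, samples=10):
        # Measure TCP connection-setup time as a rough proxy for latency.
        results = []
        for _ in range(samples):
            start = time.monotonic()
            with socket.create_connection((host, port), timeout=5):
                pass
            results.append((time.monotonic() - start) * 1000.0)  # milliseconds
        return results

    for host in ("example.com", "example.net"):  # hypothetical endpoints
        rtts = connect_times(host)
        print("%s: mean %.1f ms, jitter (stdev) %.1f ms"
              % (host, statistics.mean(rtts), statistics.stdev(rtts)))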
6. Potential End-user Responses to Data Discrimination
a. Bypassing Discriminatory Networks
Some computer experts have suggested that the prospect of networks treating
some data differently than others might give rise to a kind of arms race between network
operators seeking to employ technical measures to manage their networks and end users
seeking to employ countermeasures to avoid them.
131
They suggest, for example, that
end users can bypass networks to a limited degree through cooperative access sharing.
132
On a small scale, a group of neighbors with access to multiple, distinct broadband
Internet service providers might each set up an open-access Wi-Fi router, giving
everyone in the group access to each other’s service provider. If one provider engages in
data discrimination, members of the cooperative could bypass it by accessing the Internet
through another provider in the pool. Such a strategy, however, depends on a last-mile
network operator allowing the use of open-access Wi-Fi access points in the first place.
133
To the extent that last-mile networks allow the resale of their services through open-
access wireless networks, competition from resellers might have a similar effect.
134
Alternatively, a municipality might set up its own wireline or wireless network if its
residents are not satisfied with the service offered by private providers. It is
conceivable, however, that a municipal network could also engage in certain practices
that some of its residents consider to be discriminatory.
135
129
See, e.g., Felten, supra note 36, at 4.
130
Robert McMillan, Black Hat: Researcher Creates Net Neutrality Test, COMPUTERWORLD, Aug. 2, 2006,
available at
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9002154.
131
See generally William H. Lehr et al., Scenarios for the Network Neutrality Arms Race, 34th Research
Conference on Communication, Information, & Internet Policy (2006), available at
http://web.si.umich.edu/tprc/papers/2006/561/TPRC2006_Lehr%20Sirbu%20Peha%20Gillett%20Net%20
Neutrality%20Arms%20Race.pdf. See also Lehr, Tr. I at 52.
132
Lehr et al., supra note 131, at 10-13. See also Lehr, Tr. I at 41-43.
133
Lehr et al., supra note 131, at 10-13.
134
Id. at 13-14 (describing the Wi-Fi resale business model of FON); Lehr, Tr. I at 42-43. See also FON,
What’s FON, http://www.fon.com/en/info/whatsFon (last visited May 14, 2007).
135
Lehr et al., supra note 131, at 15; Lehr, Tr. I at 43.
b. Technical Measures to Counter Data Discrimination
Countering data discrimination, like detecting it in the first place, may be
difficult, especially for end users without technical backgrounds. A number of technical
countermeasures do exist, however, at least to a limited degree.
Several potential methods for circumventing applications-based degradation or blocking
involve the computer port numbers that typically indicate which software application a
computer should use to open a packet. Computer users and applications developers can
prevent networks from identifying the application associated with a packet by employing
port numbers not commonly associated with a particular application or by assigning and
reassigning port numbers dynamically.
136
Alternatively, applications developers can use
TCP port 80, the number used by most hypertext transfer protocol (“HTTP”) traffic and,
thus, potentially make an application’s traffic indistinguishable from most other Web
browser-based traffic.
137
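A minimal sketch of this port-selection idea follows (in Python); it is illustrative only and is not drawn from any cited source. Binding to port 0 asks the operating system to assign an ephemeral port, so the application’s port number changes from run to run, whereas binding to TCP port 80 would place the application’s traffic on the port number most HTTP traffic uses (and ordinarily requires elevated privileges).

    import socket

    def listen_on(port):
        # Create a listening TCP socket on the requested port.
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("0.0.0.0", port))  # port 0: the OS picks an ephemeral port
        srv.listen(1)
        return srv

    dynamic = listen_on(0)
    print("listening on dynamically assigned port", dynamic.getsockname()[1])
    dynamic.close()

    # By contrast, listen_on(80) would make the traffic share the port number
    # used by most Web traffic (and typically needs administrator privileges).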
To evade differential treatment based on a sender or receiver’s IP address, an end
user could access information from the Internet through a proxy that reroutes data
through another server, camouflaging its source and destination.
138
Likewise, packets
might be encrypted so that a network cannot use packet inspection to identify their
contents or related application.
139
Such encrypted packets could also be transmitted
through a VPN to a gateway computer outside the ISP’s network, where the packets
could be decrypted and forwarded to their recipient.
140
In such a scenario, the last-mile
ISP would see only streams of encrypted packets traveling from the end user through the
VPN, thus preventing the ISP from identifying the computers with which the sender is
communicating.
141
Some ISPs have responded to these measures by banning the use of
VPNs and encryption protocols or charging a fee for their use.
142
Alternatively, a
network might simply relegate encrypted packets to a lower priority, or drop them
altogether, when it cannot identify their contents.
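The general point that encryption hides payload contents from networks along the path can be illustrated with the following sketch (in Python). It uses ordinary TLS from the standard library rather than the IPSec or VPN arrangements described above, and the host name is a hypothetical assumption; the application data travels as ciphertext, although the destination address and the volume and timing of the traffic remain visible to the network.

    import socket
    import ssl

    context = ssl.create_default_context()
    with socket.create_connection(("example.com", 443), timeout=5) as raw:
        with context.wrap_socket(raw, server_hostname="example.com") as tls:
            # The request below is encrypted in transit; packet inspection on
            # the access network sees only ciphertext addressed to port 443.
            tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n"
                        b"Connection: close\r\n\r\n")
            print(tls.recv(200))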
An alternative encryption system called “onion routing” conceals packets’ content,
source, and destination without the use of a VPN. A packet is enveloped in several layers
of encryption and then sent through a special network of links and unique routers called
136
Lehr et al., supra note 131, at 19-20. See also Lehr, Tr. I at 45-46.
137
Lehr et al., supra note 131, at 20-21.
138
Id.
139
For example, some P2P software has been rewritten using the Internet IP Security protocol (“IPSec”) to
encrypt everything in the packets except the IP header. Id.
140
Felten, supra note 36, at 8-9.
141
Id.
142
Lehr et al., supra note 131, at 22.
“routing anonymizers” or “onion routers.”
143
A layer of encryption is removed at each
router until the packet is stripped of encryption and delivered to its destination. Onion
routing prevents network operators from knowing who is communicating with whom,
and the content of the communication is encrypted up to the point where the traffic leaves
the onion-routing network.
144
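The layering that underlies onion routing can be sketched as follows (in Python). Real onion routers use per-hop public-key cryptography; here a trivially reversible encoding (base64) stands in for encryption, purely to show how each router removes one layer, learns only the next hop, and forwards the remainder. The router names and route are hypothetical.

    import base64
    import json

    def wrap(message, hops):
        # hops: the routers the packet should traverse, ending with the destination.
        payload = message
        for i in reversed(range(1, len(hops))):
            layer = json.dumps({"next": hops[i], "data": payload})
            payload = base64.b64encode(layer.encode()).decode()
        return payload  # the sender hands this to hops[0]

    def peel(payload):
        # What a single onion router does: remove one layer.
        layer = json.loads(base64.b64decode(payload).decode())
        return layer["next"], layer["data"]

    onion = wrap("hello from the sender",
                 ["routerA", "routerB", "routerC", "destination"])
    holder, current = "routerA", onion
    while True:
        next_hop, current = peel(current)
        print(holder, "forwards to", next_hop)
        if next_hop == "destination":
            break
        holder = next_hop
    print("destination receives:", current)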
Even with encryption, however, a network might be able to infer the type of
packet through flow classification and continue to target certain packets for
discrimination.
145
An end user might try to evade flow classification by altering the size
and timing of packets, adding blank packets to the flow, or mixing packets from multiple
flows.
146
A network might respond, however, by degrading or blocking all of the user’s
traffic or by manipulating that traffic in a way that affects one type of application much
more than it does other types of traffic.
147
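One of the countermeasures mentioned above, padding traffic so that packet sizes no longer reveal the application, can be sketched as follows (in Python). The two-byte length prefix and the 1,200-byte cell size are illustrative assumptions, not details taken from any cited source.

    import os
    import struct

    CELL_SIZE = 1200  # every payload is padded to this many bytes

    def pad(payload):
        if len(payload) > CELL_SIZE - 2:
            raise ValueError("payload too large for one cell")
        header = struct.pack("!H", len(payload))           # true payload length
        filler = os.urandom(CELL_SIZE - 2 - len(payload))  # random padding
        return header + payload + filler

    def unpad(cell):
        (length,) = struct.unpack("!H", cell[:2])
        return cell[2:2 + length]

    message = b"short VoIP-sized datagram"
    cell = pad(message)
    print(len(cell), "bytes on the wire; recovered:", unpad(cell))

Because every cell is the same size on the wire, an observer classifying flows by packet length can no longer single out the padded application’s traffic, though the padding itself consumes additional bandwidth.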
Alternatively, end users might be able to offset the effects of certain kinds of
discrimination to some extent by using buffering techniques to preload data streams into
a computer’s memory and then accessing them after a period of time, thereby alleviating
problems with latency or jitter. Such techniques, however, may not be useful for real-
time applications like VoIP and streaming video.
148
In some circumstances, caching
content closer to end users might also effectively circumvent discriminatory practices that
are implemented further into the core of the Internet.
149
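The buffering idea described above can be illustrated with the following minimal simulation (in Python). The packet interval, the amount of simulated network jitter, and the 200-millisecond playback buffer are hypothetical assumptions; the point is simply that delaying playback absorbs variation in arrival times, at the cost of added delay that interactive, real-time applications may not tolerate.

    import random

    SEND_INTERVAL = 0.020  # packets sent every 20 ms
    random.seed(1)

    # Simulated arrival times: each packet suffers up to 120 ms of network jitter.
    arrivals = [i * SEND_INTERVAL + random.uniform(0.0, 0.120) for i in range(50)]

    for buffer_delay in (0.0, 0.200):
        playback_start = arrivals[0] + buffer_delay
        late = sum(1 for i, arrival in enumerate(arrivals)
                   if arrival > playback_start + i * SEND_INTERVAL)
        print("buffer %3.0f ms: %d of %d packets arrive too late to play on time"
              % (buffer_delay * 1000, late, len(arrivals)))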
* * *
The text above provides historical and technical background regarding the
Internet to help inform the policy discussion in this Report. In the next Chapter, we
address the jurisdiction of the relevant federal agencies in the area of broadband Internet
access, as well as the legal and regulatory developments that have prompted the current
debate over network neutrality.
143
Id.
144
Id. See also generally U.S. Navy, Onion Routing: Executive Summary, http://www.onion-router.net/Summary.html (last visited June 15, 2007).
145
Felten, supra note 36, at 8-9; Lehr et al., supra note 131, at 23; see Peha, supra note 36, at 4.
146
Lehr et al., supra note 131, at 23.
147
Felten, supra note 36, at 9.
148
Lehr, Tr. I at 48-49.
149
Id. at 49.
II. LEGAL AND REGULATORY BACKGROUND AND DEVELOPMENTS
If recent years have seen considerable change in the development and deployment
of platforms for broadband Internet access, they also have seen considerable flux in the
field of broadband regulation. A comprehensive review of federal and state law issues
pertinent to the provision of broadband Internet access would go well beyond the scope
of this Report.
150
This Chapter, however, provides a basic legal and regulatory
framework for the policy discussion to follow in the remainder of the Report. To that
end, it sketches the central elements of FTC (in Section A) and FCC (in Section B)
jurisdiction over broadband services, including the statutory bases of that jurisdiction.
This Chapter also reviews (in Section C) certain decisions of the courts and the agencies,
including recent enforcement activity, rulemaking, and policy statements that have served
to clarify both jurisdictional and substantive questions about broadband Internet access.
In brief, broadband services generally are subject to the shared jurisdiction of
the FCC, the FTC, and the DOJ. FCC jurisdiction
comes chiefly from the Communications Act,
151
which established the FCC and provides
for the regulation of “interstate and foreign commerce in communication by wire and
radio.”
152
FTC jurisdiction over broadband services comes chiefly from its statutory
mandate to prevent “unfair methods of competition” and “unfair or deceptive acts or
practices in or affecting commerce” under the FTC’s enabling legislation, the FTC Act.
153
The FTC’s authority to enforce the federal antitrust laws generally is shared with DOJ’s
Antitrust Division.
154
150
For a more detailed treatment of the pertinent legal background, see, e.g., PETER W. HUBER ET AL.,
FEDERAL TELECOMMUNICATIONS LAW (2d ed. 1999) (especially Chapters 3, 10-12, Supp. (2005), and
Supp. (2006)). See also NUECHTERLEIN & WEISER, supra note 30 (discussing Internet commerce, policy,
and law).
151
47 U.S.C. §§ 151 et seq. Significant amendments to the Communications Act of 1934, 48 Stat. 1064
(1934), were imposed by the Telecommunications Act of 1996. See Pub. L. No. 104-104, 110 Stat. 56
(1996). Although broad in scope, the Telecommunications Act of 1996 did not replace the
Communications Act, but amended it.
152
47 U.S.C. § 151.
153
15 U.S.C. §§ 41 et seq. Although the FTC Act is central to the FTC’s jurisdiction over broadband
Internet access, and competition and consumer protection issues generally, it is not the only statutory basis
of FTC authority pertinent to the larger Internet debate. With regard to competition concerns, the FTC is
also charged under, for example, the Clayton Act (15 U.S.C. §§ 12-27); the Hart-Scott-Rodino Antitrust
Improvements Act of 1976 (15 U.S.C. § 18a) (amending the Clayton Act); and the International Antitrust
Enforcement Assistance Act of 1994 (15 U.S.C. §§ 46, 57b-1, 1311, 1312, 6201, 6201 note, 6202-6212).
154
The FTC and DOJ share antitrust authority with regard to most areas of the economy. The two antitrust
agencies have long-standing arrangements, first established in 1948, that allow them to avoid inconsistent
or duplicative efforts. See infra notes 218-19 for a discussion of various DOJ merger reviews in the area of
Internet broadband access.
A. FTC Jurisdiction under the FTC Act
The FTC Act gives the FTC broad authority with regard to both competition and
consumer protection matters in most sectors of the economy.
155
Under the FTC Act,
“[u]nfair methods of competition in or affecting commerce, and unfair or deceptive acts
or practices in or affecting commerce,” are prohibited,
156
and the FTC has a general
statutory mandate “to prevent persons, partnerships, or corporations,” from engaging in
such prohibited methods, acts, and practices.
157
At the same time, the FTC Act cabins this general grant of statutory authority
with regard to certain activities. In particular, the FTC’s enforcement authority under the
FTC Act does not reach “common carriers subject to the Communications Act of 1934,”
as amended.
158
An entity is a common carrier, however, only with respect to services that
it provides on a common carrier basis.
159
As discussed below in Chapter II.C, because
most broadband Internet access services are not provided on a common carrier basis, they
are part of the larger economy subject to the FTC’s general competition and consumer
protection authority with regard to methods, acts, or practices in or affecting commerce.
Exercising its statutory authority over competition matters, the FTC has, where
appropriate, investigated and brought enforcement actions in matters involving access to
content via broadband and other Internet access services. For example, the FTC
challenged the proposed merger between America Online (“AOL”) and Time Warner, on
the basis that the merger threatened to harm competition and injure consumers in several
markets, including those for broadband Internet access and residential Internet transport
services (i.e., “last mile” access).
160
The consent order resolving the agency challenge
required the merged entity to open its cable system to competitor Internet service
155
The FTC’s authority is defined broadly to deal with “methods . . . acts or practices in or affecting
commerce.” 15 U.S.C. § 45(a)(2). But for certain limited market sectors that are expressly excluded from
the FTC’s enforcement authority, and for the areas in which FTC jurisdiction over various market sectors is
shared, the FTC’s authority ranges broadly over “commerce,” without restriction to particular segments of
the economy. See id. (FTC authority generally; express exclusion for, e.g., common carriers); supra note
154 and accompanying text (shared FTC/DOJ antitrust authority).
156
15 U.S.C. § 45(a)(1). In 1994, Congress defined an “unfair” act or practice over which the FTC has
authority as one that “causes or is likely to cause substantial injury to consumers which is not reasonably
avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to
competition.” Id. § 45(n).
157
Id. § 45(a)(2).
158
Id.
159
47 U.S.C. § 153(44) (provider of telecommunications services deemed a common carrier under the
Communications Act “only to the extent that it is engaged in providing telecommunications services”).
160
Am. Online, Inc. & Time Warner, Inc., FTC Dkt. No. C-3989 (Dec. 17, 2000) (complaint), available at
http://www.ftc.gov/os/2000/12/aolcomplaint.pdf.
providers on a non-discriminatory basis, for all content.
161
The order also prevented the
company from interfering with the content of non-affiliated ISPs or with the ability of
non-affiliated providers of interactive TV services to access the AOL/Time Warner
system.
162
Moreover, the order required the company, in areas where it provided cable
broadband service, to offer AOL’s DSL service in the same manner and at the same retail
pricing as in areas where it did not provide cable broadband service.
163
The FTC has addressed Internet access and related issues in a number of other
merger investigations as well.
164
For example, the FTC investigated the acquisition by
Comcast and Time Warner of the cable assets of Adelphia Communications and, in a
related matter, the exchange of various cable systems between Comcast and Time
Warner. In the course of that investigation, the FTC examined, among other things, the
likely effects of the transactions on access to and pricing of content. The investigation
eventually was closed because a majority of the Commission concluded that the
acquisitions were unlikely to foreclose competition or result in increased prices.
165
In addition to such competition issues are various consumer protection issues that
have been raised in the larger Internet access context. Over the past decade, the FTC has
brought a variety of cases against Internet service providers that have engaged in
allegedly deceptive marketing and billing practices.
166
For example, in 1997, the FTC
separately sued America Online, CompuServe, and Prodigy, alleging that each company
had offered “free” trial periods that resulted in unexpected charges to consumers.
167
One
Prodigy advertisement, for example, touted a “Free Trial” and “FREE 1ST MONTH’S
MEMBERSHIP” conspicuously, while a fine print statement at the bottom of the back
panel of the advertisement stipulated: “Usage beyond the trial offer will result in extra
161
Id. (Apr. 17, 2001) (consent order), available at http://www.ftc.gov/os/2001/04/aoltwdo.pdf.
162
Id.
163
Id.
164
See, e.g., Cablevision Sys. Corp., 125 F.T.C. 813 (1998) (consent order); Summit Commun. Group, 120
F.T.C. 846 (1995) (consent order).
165
See Statement of Chairman Majoras, Commissioner Kovacic, and Commissioner Rosch Concerning the
Closing of the Investigation into Transactions Involving Comcast, Time Warner Cable, and Adelphia
Communications (Jan. 31, 2006) (FTC File No. 051-0151); see also Statement of Commissioners Jon
Leibowitz and Pamela Jones Harbour (Concurring in Part, Dissenting in Part), Time
Warner/Comcast/Adelphia (Jan. 31, 2006) (FTC File No. 051-0151). Both statements are available at
http://www.ftc.gov/opa/2006/01/fyi0609.htm.
166
See, e.g., Am. Online, Inc. & CompuServe Interactive Servs., Inc., FTC Dkt. No. C-4105 (Jan. 28, 2004)
(consent order), available at http://www.ftc.gov/os/caselist/0023000/0023000aol.shtm
; Juno Online Servs.,
Inc., FTC Dkt. No. C-4016 (June 25, 2001) (consent order), available at
http://www.ftc.gov/os/caselist/c4016.shtm.
167
See Am. Online, Inc., FTC Dkt. No. C-3787 (Mar. 16, 1998) (consent order), available at
http://www.ftc.gov/os/1997/05/ameronli.pdf; CompuServe, Inc., 125 F.T.C. 451 (1998) (consent order);
Prodigy, Inc., 125 F.T.C. 430 (1998) (consent order).
fees, even during the first month.”
168
Other alleged misrepresentations included AOL’s
failure to inform consumers that fifteen seconds of connect time was added to each online
session (in addition to the practice of rounding chargeable portions of a minute up to the
next whole minute),
169
as well as its misrepresentation that it would not debit customers’
bank accounts before receiving authorization.
170
The settlement orders in these matters
prohibited the companies from, among other things, misrepresenting the terms or
conditions of any trial offer of online service. Although all three matters involved dial-
up, or narrowband, Internet access, the orders are not limited by their terms to
narrowband services.
More recently, in the matter of FTC v. Cyberspace.com,
171
the federal district
court for the Western District of Washington granted summary judgment in favor of the
FTC, finding, among other things, that the defendants had violated the FTC Act by
mailing false or misleading purported rebate or refund checks to millions of consumers
and businesses without disclosing, clearly and conspicuously, that cashing the checks
would prompt monthly charges for Internet access services on the consumers’ and
businesses’ telephone bills. Following a trial on the issue of consumer injury, the court
ordered the defendants to pay more than $17 million to remedy the injury caused by their
fraudulent conduct. The Court of Appeals for the Ninth Circuit affirmed the trial court’s
liability finding last year.
172
In addition, the FTC has brought numerous cases involving the hijacking of
consumers’ modems.
173
For example, in FTC v. Verity International Ltd.,
174
the
Commission alleged that the defendants orchestrated a scheme whereby consumers
seeking online entertainment were disconnected from their regular ISPs and reconnected
to a Madagascar phone number. The consumers were then charged between $3.99 and
168
Prodigy, 125 F.T.C. at 430 exhibit A (complaint). Similar complaints were lodged against America
Online and CompuServe.
169
For example, “an online session of 2 minutes and 46 seconds, with the 15 second supplement, totals 3
minutes and 1 second and is billed as 4 minutes.” Am. Online, FTC Dkt. No. C-3787 at 4 exhibit E
(complaint).
170
See id. at 5-6 exhibit F.
171
No. C00-1806L, 2002 U.S. Dist. LEXIS 25565 (W.D. Wash. July 10, 2002), aff’d, 453 F.3d 1196 (9th
Cir. 2006).
172
Cyberspace.com, 453 F.3d at 1196.
173
A list of FTC enforcement actions involving the Internet and online services generally, and modem
hijacking allegations in particular, can be found at http://www.ftc.gov/bcp/internet/cases-internet.pdf.
These actions include the following: FTC v. Sheinkin, No. 2-00-3636-18 (D.S.C. 2001); FTC v. RJB
Telcom, Inc., No. CV 00-2017 PHX SRB (D. Ariz. 2000); FTC v. Ty Anderson, No. C 00-1843P (W.D.
Wash. 2000); FTC v. Audiotex Connection, Inc., No. CV-97-0726 (DRH) (E.D.N.Y. 1997).
174
335 F. Supp. 2d 479 (S.D.N.Y. 2004), aff’d in part, rev’d in part, 443 F.3d 48 (2d Cir. 2006), cert.
denied, 127 S. Ct. 1868 (2007).
$7.78 per minute for the duration of each connection. In that case, AT&T and Sprint –
which were not parties to the FTC enforcement action – had carried the calls connecting
the consumers’ computers to the defendants’ servers. Consumers were billed at AT&T’s
and Sprint’s filed rates for calls to Madagascar. The defendants therefore argued that the
entertainment service in question was provided on a common carrier basis and thus
outside the FTC’s jurisdiction. One defendant also claimed to be a common carrier itself
and hence beyond FTC jurisdiction. Although both the District Court and the Court of
Appeals rejected those arguments, the FTC had to expend substantial time and resources
litigating the question of jurisdiction.
175
As the Verity case demonstrates, enforcement difficulties posed by the common
carrier exemption are not merely speculative. The FTC regards the common carrier
exemption in the FTC Act as outmoded and, as it creates a jurisdictional gap, an obstacle
to sound competition and consumer protection policy. As the FTC has explained before
Congress, technological advances have blurred traditional boundaries between
telecommunications, entertainment, and high technology.
176
For example, providers
routinely include telecommunications services, such as telephone service, and non-
telecommunications services, such as Internet access, in bundled offerings. As the
telecommunications and Internet industries continue to converge, the common carrier
exemption is likely to frustrate the FTC’s efforts to combat unfair or deceptive acts and
practices and unfair methods of competition in these interconnected markets.
Finally, based on the above discussion of the FTC’s jurisdiction over broadband
services, three general points may be in order. First, as the investigations and
enforcement actions described above suggest, the FTC has both authority and experience
in the enforcement of competition and consumer protection law provisions pertinent to
broadband Internet access. Second, the FTC Act provisions regarding “[u]nfair methods
of competition in or affecting commerce, and unfair or deceptive acts or practices in or
affecting commerce,” are general and flexible in nature, as demonstrated by judicial and
administrative decisions across diverse markets.
177
Third, the FTC’s investigative and
enforcement actions have been party- and market-specific; that is, neither the general
body of antitrust and consumer protection law nor the FTC’s enforcement and policy
record determines any particular broadband connectivity policy or commits the
Commission to favoring any particular model of broadband deployment.
175
In response to a request from the district court, the FCC filed an amicus brief in support of the FTC’s
jurisdiction in this matter. See Verity, 443 F.3d at 56, 61.
176
See FTC Jurisdiction over Broadband Internet Access Services: Hearing Before the S. Comm. on the
Judiciary, 109th Cong. 9-11 (2006) (statement of William E. Kovacic, Comm’r, FTC), available at
http://www.ftc.gov/opa/2006/06/broadband.shtm.
177
“Congress has deliberately left these phrases undefined so that the parameters of the FTC’s powers and
the scope of its administrative and judicial functions could be responsive to a wide variety of business
practices.” ABA SECTION OF ANTITRUST LAW, ANTITRUST LAW DEVELOPMENTS 643 & n.4 (6th ed. 2007)
(citing FTC v. Sperry & Hutchinson Co., 405 U.S. 233, 239-44 (1972); FTC v. R.F. Keppel & Bro., 291
U.S. 304, 310-12 (1934)).
B. FCC Jurisdiction under the Communications Act
As noted above, FCC jurisdiction over broadband services arises under the
Communications Act.
178
Central to the broadband discussion is a distinction under that
Act between “telecommunications services” and “information services.”
179
The former,
but not the latter, are subject to substantial mandatory common carrier regulations under
Title II of the Communications Act.
180
While not subject to the Title II common carrier
regulations, information services are treated by the FCC as subject to its general,
ancillary jurisdiction under Title I of the Communications Act.
181
Under Title II, providers of telecommunications services are bound to, among
other things, enable functional physical connections with competing carriers,
182
at “just
and reasonable” rates,
183
which the FCC may prescribe,
184
and are prohibited from
178
47 U.S.C. §§ 151 et seq.
179
Under the Communications Act, an “information service . . . means the offering of a capability for
generating, acquiring, storing, transforming, processing, retrieving, utilizing, or making available
information via telecommunications . . . .” 47 U.S.C. § 153(20). In contrast, “‘telecommunications
service’ means the offering of telecommunications for a fee directly to the public . . . regardless of the
facilities used,” id. § 153(46), and “‘telecommunications’ means the transmission, between or among points
specified by the user, of information of the user’s choosing, without change in the form or content of the
information as sent and received.” Id. § 153(43). In brief, to act simply as a transmitter or transducer of
information is to provide a telecommunications service, whereas to act as a transformer of information is to
provide an information service.
180
The Communications Act is divided into seven Titles. See generally 47 U.S.C. §§ 151 et seq. Under
Title I are “General Provisions,” including, for example, the purposes of the Act, definitions, the
establishment of the FCC, and the structure and operations of the FCC. Under Title II are the “Common
Carriers” provisions, including, among others, common carrier regulations and “Universal Service”
requirements. Under Title III are “Provisions Relating to Radio.” Under Title IV are “Procedural and
Administrative Provisions.” Under Title V are “Penal Provisions.” Under Title VI are provisions relating
to “Cable Communications.” Finally, miscellaneous additional provisions are included under Title VII.
181
See, e.g., In re Appropriate Framework for Broadband Access to the Internet Over Wireline Facilities,
20 FCC Rcd 14853, 14914 (2005) (report and order and notice of proposed rulemaking) (“We recognize
that . . . the predicates for ancillary jurisdiction are likely satisfied for any consumer protection, network
reliability, or national security obligation that we may subsequently decide to impose on wireline
broadband Internet access service providers.”). Although the scope of the FCC’s ancillary jurisdiction over
broadband services has not been defined by the courts, it should be noted that the Supreme Court, in dicta,
has recognized the application of the FCC’s ancillary jurisdiction over information service providers. See
Nat’l Cable & Telecomms. Ass’n v. Brand X Internet Servs., 545 U.S. 967, 976 (2005).
182
47 U.S.C. § 201(a).
183
Id. § 201(b).
184
Id. § 205.
making “any unjust or unreasonable discrimination in charges, practices, classifications,
regulations, facilities, or services . . . .”
185
There are, however, several important qualifications on these Title II common
carrier requirements. First, the Communications Act expressly provides for regulatory
flexibility to facilitate competition. In particular, with regard to telecommunications
carriers or services, the FCC
shall forbear from applying any regulation or any provision of this Act . .
. if the Commission determines that–(1) enforcement . . . is not necessary
to ensure that the charges, practices, classifications, or regulations . . . are
just and reasonable and are not unjustly or unreasonably discriminatory;
(2) enforcement . . . is not necessary for the protection of consumers; and
(3) forbearance from applying such provision or regulation is consistent
with the public interest.
186
In addition, in determining such “public interest,” the FCC must “consider whether
forbearance from enforcing the provision or regulation promotes competitive market
conditions.”
187
Finally, the Communications Act expressly states that “[i]t shall be the
policy of the United States to encourage the provision of new technologies and services
to the public.”
188
As a consequence, any person “(other than the Commission) who
opposes a new technology or service proposed to be permitted under this Act shall have
the burden to demonstrate that such proposal is inconsistent with the public interest.”
189
C. Regulatory and Judicial Clarification
As noted above, a series of regulatory and judicial decisions have helped to clarify
both the distinction between information and telecommunications services and the status
of broadband services as information services. That clarification is, to an extent, in
tension with early regulatory and judicial attempts to grapple with the novel technologies
that enabled the provision of Internet access. For example, in 1980, the FCC
promulgated rules designed to address, among other things, the growing commerce in
data-processing services available via telephone wires (the “Computer II Rules”).
190
With reference to those rules, the FCC subsequently applied certain common carrier
obligations, such as non-discrimination, to local telephone companies providing early
185
Id. § 202.
186
Id. § 160(a).
187
Id. § 160(b).
188
Id. § 157(a).
189
Id. § 157(a).
190
See In re Amendment of Section 64.702 of the Comm’n’s Rules & Regulations (Second Computer
Inquiry), 77 F.C.C.2d 384, 417-23 (1980) [hereinafter Computer II Rules].
DSL services.
191
Further, as recently as 2000, the Court of Appeals for the Ninth Circuit
held that “the transmission of Internet service to subscribers over cable broadband
facilities is a telecommunications service under the Communications Act.”
192
Still, the FCC’s current view that broadband services are information services has
its roots in earlier decisions by the FCC and the courts. The same Computer II Rules that
grounded the early DSL determination distinguished between “basic” and “enhanced”
services and did not subject the latter to Title II common carrier regulation.
193
In the
following decade, the FCC recognized that ISPs provide not just “a physical connection
[to the Internet], but also . . . the ability to translate raw Internet data into information
[consumers] may both view on their personal computers and transmit to other computers
connected to the Internet.”
194
Moreover, the 1998 Universal Service Report regarded
“non-facilities-based” ISPs – those that do not own their own transmission facilities –
solely as information service providers.
195
Indeed, even the Ninth Circuit opinion that
held that ISPs offering cable broadband were offering telecommunications services
recognized that, under the Communications Act and FCC implementing regulations, a
significant portion of those services were information services.
196
In 2000, the FCC issued a Notice of Inquiry to resolve, among other things, the
application of the Communications Act’s information/telecommunications distinction to
cable broadband ISPs.
197
In its subsequent declaratory ruling in 2002, the FCC
concluded that broadband cable Internet access services were information services, not
191
In a 1998 order, the FCC found, among other things, that incumbent local exchange carriers are subject
to various interconnection obligations under Title II of the Communications Act. See In re Deployment of
Wireline Servs. Offering Advanced Telecomms. Capability, 13 FCC Rcd 24011 (1998) (memorandum
opinion and order and notice of proposed rulemaking). The FCC noted that, although DSL and other
advanced services could “also be deployed using other technologies over satellite, cable, and wireless
systems, [it would] limit the discussion here to wireline services, because none of the petitioners raise
issues about these other technologies.” Id. at 24016 n.11. See also GTE Operating Cos. Tariff No. 1, 13
FCC Rcd 22466 (1998).
192
AT&T Corp. v. City of Portland, 216 F.3d 871, 880 (9th Cir. 2000).
193
See Computer II Rules, 77 F.C.C.2d at 428-32.
194
In re Fed.-State Joint Bd. on Universal Serv., 13 FCC Rcd 11501, 11531 (1998).
195
See id. at 11530.
196
See AT&T, 216 F.3d at 877-78.
197
In re Inquiry Concerning High-Speed Access to the Internet Over Cable & Other Facilities, 15 FCC Rcd
19287 (2000) (notice of inquiry). As noted above, this notice of inquiry had been expressly limited in its
application to broadband services provided by local telephone companies over wireline. Prior to 2000, the
FCC had not ruled on the application of common carrier obligations to broadband services provided via
cable. It sought, in this notice of inquiry, “to instill a measure of regulatory stability in the market,” and to
resolve a split in the Circuit courts regarding the regulatory status of “cable modem” broadband services.
See id. at 19288 & n.3 (comparing AT&T, 216 F.3d 871 with Gulf Power Co. v. FCC, 208 F.3d 1263 (11th
Cir. 2000)).
telecommunications services, and hence not subject to common carrier regulation under
Title II.
198
In reaching that conclusion, the FCC emphasized the information coding,
storage, and transformation processes that were central to such services, as it had in
concluding that non-facilities-based services were information services in its Universal
Service Report.
199
Moreover, the FCC concluded that there was no principled or
statutory basis for treating facilities-based and non-facilities-based services differently, as
both offered “a single, integrated service that enables the subscriber to utilize Internet
access service . . . .”
200
In response, several parties sought judicial review of the FCC’s determination in a
dispute eventually heard by the Supreme Court, in National Cable &
Telecommunications Association v. Brand X Internet Services (“Brand X”).
201
In Brand
X, the Court upheld the FCC’s determination that cable broadband is an information
service as a reasonable construction of the Communications Act, reversing a Ninth
Circuit decision that had relied on City of Portland as precedent.
202
In the wake of the Brand X decision, the FCC has continued to expand, platform
by platform, upon the broadband policy defended in that case. In 2005, the FCC released
the Appropriate Framework for Broadband Access to the Internet over Wireline
Facilities (“Wireline Order”), in which it reclassified wireline broadband Internet access
service by facilities-based carriers as an information service.
203
That reclassification
pertains to both “wireline broadband Internet access service . . . [and] its transmission
component,”
204
and is independent of the underlying technology employed.
205
The
198
In re Inquiry Concerning High-Speed Access to the Internet Over Cable & Other Facilities, 17 FCC Rcd
4798, 4821-22 (2002) (declaratory ruling and notice of proposed rulemaking).
199
Id. at 4820-23.
200
Id. at 4823.
201
545 U.S. 967 (2005).
202
Id. at 973-74. It should be noted that Brand X is fundamentally a Chevron decision. That is, the Court
did not examine the question of the status of cable broadband services as an abstract or de novo issue of
statutory construction. Rather, the Court held that the FCC’s ruling was – because based on reasonable
policy grounds – a permissible resolution of ambiguous statutory language in the Telecommunications Act
of 1996, given the FCC’s authority under the Communications Act, the Administrative Procedure Act, and
standards of agency deference the Court had articulated in Chevron v. NRDC. See id. at 973 (citing
Chevron, U.S.A., Inc. v. Natural Res. Def. Council, Inc., 467 U.S. 837 (1984) and 5 U.S.C. §§ 551 et seq.).
203
In re Appropriate Framework for Broadband Access to the Internet over Wireline Facilities, 20 FCC
Rcd 14853 (2005) (report and order and notice of proposed rulemaking).
204
Id. at 14856.
205
Id. at 14860 n.15 (“We stress that our actions in this Order are limited to wireline broadband Internet
access service and its underlying broadband transmission component, whether that component is provided
over all copper loops, hybrid copper-fiber loops, a fiber-to-the-curb or fiber-to-the-premises (FTTP)
network, or any other type of wireline facilities, and whether that component is provided using circuit-
switched, packet-based, or any other technology.”).
Wireline Order does, however, permit facilities-based wireline carriers to elect to provide
broadband transmission service on a common carrier basis.
206
In 2006, the FCC released an order in which it classified broadband-over-
powerline Internet access services as information services.
207
Also in 2006, the FCC
granted – by operation of law – Verizon’s petition for forbearance from Title II and
Computer Inquiry Rules
208
with respect to its broadband services.
209
Verizon had asked
for forbearance “from traditional common-carriage requirements for all broadband
services,” seeking relief chiefly with regard to certain commercial broadband services not
expressly addressed in the Wireline Order or other rulemaking.
210
Most recently, the FCC clarified more generally the status of wireless services as
information services, issuing in 2007 a declaratory ruling finding: (1) “that wireless
broadband Internet access service is an information service”; (2) that while the
underlying transmission component of such service is “telecommunications,” offering
telecommunications transmission “as a part of a functionally integrated Internet access
service is not ‘telecommunications service’ under section 3 of the Act”; and (3) “that
206
Id.
207
In re United Power Line Council’s Petition for Declaratory Ruling Regarding the Classification of
Broadband Over Power Line Internet Access Serv. as an Info. Serv., 21 FCC Rcd 13281 (2006)
(memorandum opinion and order).
208
See In re Regulatory & Policy Problems Presented by the Interdependence of Computer & Commun.
Servs. & Facilities, 28 F.C.C.2d 267 (1971) (final decision and order) (“Computer I”); In re Amendment of
Section 64.702 of the Comm’n’s Rules & Regulations (Second Computer Inquiry), 77 F.C.C.2d 384 (1980)
(final decision) (“Computer II”); In re Computer III Further Remand Proceedings: Bell Operating Co.
Provision of Enhanced Servs., 14 FCC Rcd 4289 (1999) (report and order). Collectively, these matters are
known as the “Computer Inquiry Rules.”
209
See Press Release, FCC, Verizon Telephone Companies’ Petition for Forbearance from Title II and
Computer Inquiry Rules with Respect to Their Broadband Services Is Granted by Operation of Law (Mar.
20, 2006), available at http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-264436A1.pdf
(explaining
that a forbearance petition will be deemed granted if the FCC does not deny the petition within one year of
receipt, unless the one-year period is extended by the FCC). Although the FCC did not explicitly grant such
relief, “the effect given to the petition by operation of law grants Verizon’s further broadband relief,
continuing our policy to encourage new investment.” In re Petition of the Verizon Tel. Cos. for
Forbearance under 47 U.S.C. § 160(c) from Title II & Computer Inquiry Rules with Respect to Their
Broadband Servs., WC Docket 04-440 (2006), 2006 FCC LEXIS 1333 (Chairman Martin & Comm’r Tate,
concurring).
210
Such services included: (1) packet-switched services capable of 200 Kbps in each direction and (2)
certain optical networking, hubbing, and transmission services. See In re Petition of the Verizon Tel. Cos.
for Forbearance under 47 U.S.C. § 160(c) from Title II & Computer Inquiry Rules with Respect to Their
Broadband Servs., WC Docket 04-440 (Feb. 7, 2006) (ex parte letter from Verizon Tel. Cos.), available at
http://gullfoss2.fcc.gov/prod/ecfs/retrieve.cgi?native_or_pdf=pdf&id_document=6518324844.
mobile wireless broadband Internet access service is not a ‘commercial mobile service’
under section 332 of the Act.”
211
Thus, over the past few years, the FCC has essentially unified the regulatory
status of cable, wireline, powerline, and wireless broadband Internet access services as
information services that are not subject to Title II common carrier requirements.
212
In
doing so, the FCC has focused on the abstract functional properties of ISPs as they
ranged across varying implementations or platforms. Underlying this unification has
been a significant degree of deregulation across broadband technologies, in keeping with
the statutory interest under the Communications Act in furthering competition and the
development of new technologies.
213
The FCC has nonetheless continued to demonstrate an interest in, and
commitment to, broadband Internet access. Certain policy statements have sought to
guide industry conduct to avoid both FCC enforcement actions and the “potentially
destructive” impact of overbroad and premature regulation of an “emerging market.”
214
In 2004, then-FCC Chairman Michael Powell challenged the industry to preserve four
“Internet Freedoms” to that end. They were:
(1) The “Freedom to Access Content . . . consumers should have access to their
choice of legal content” (within “reasonable limits” imposed by legitimate
network management needs);
211
In re Appropriate Regulatory Treatment for Broadband Access to the Internet Over Wireless Networks,
22 FCC Rcd 5901, 5901-02 (2007) (declaratory ruling).
212
See id. (“This approach is consistent with the framework that the Commission established for cable
modem Internet access service, wireline broadband Internet access service, and Broadband over Power
Line (BPL) – enabled Internet access service and it establishes a minimal regulatory environment for
wireless broadband Internet access service that promotes our goal of ubiquitous availability of broadband to
all Americans.”) (citations omitted).
213
See, e.g., Assessing the Communications Marketplace: A View from the FCC: Hearing Before the S.
Comm. on Commerce, Sci., & Transp., 110th Cong. 2 (2007) (statement of Kevin J. Martin, Chairman,
FCC), available at http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-270192A1.pdf
(“In 2005, the
Commission created a deregulatory environment that fueled private sector investment. . . . Broadband
deployment has been our top priority at the Commission, and we have begun to see some success as a result
of our efforts.”); see also, e.g., Thorne, Tr. II at 34 (“Over the past ten years, the policy of Congress and the
Federal Communications Commission has been to encourage investment and innovation in broadband
networks. This policy has been wildly successful.”). In addition, the FCC had undertaken to expand the
supply of broadband access services by, for example, promoting the use of unlicensed spectrum in rural
areas. See In re Implementation of the Commercial Spectrum Enhancement Act & Modernization of the
Comm’n’s Competitive Bidding Rules & Procedures, 20 FCC Rcd 11268 (2005) (declaratory ruling and
notice of proposed rulemaking) (implementing Enhance 911 Services Act, Pub. L. No. 108-494, 118 Stat.
3986, Title II (2004)). See infra Chapter VI.D for a more detailed discussion of federal spectrum policies.
214
Michael K. Powell, Chairman, FCC, Keynote Address at the Silicon Flatirons Symposium: Preserving
Internet Freedom: Guiding Principles for the Industry (Feb. 8, 2004), available at
http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-243556A1.pdf.
(2) The “Freedom to Use Applications . . . consumers should be able to run the
applications of their choice” (within service plan limits and provided the
applications do not “harm the provider’s network”);
(3) The “Freedom to Attach Personal Devices . . . consumers should be permitted
to attach any devices they choose to the connection in their homes” (within
service plan limits, provided the devices do not “harm the provider’s network
or enable theft of service”); and
(4) The “Freedom to Obtain Service Plan Information . . . consumers should
receive meaningful information regarding their service plans” (so that
“broadband consumers can easily obtain the information they need to make
rational choices.”).
215
With some modification, those four Internet Freedoms were incorporated into an
FCC policy statement (“Broadband Policy Statement”), issued to accompany the
Wireline Order in 2005.
216
Recast as FCC principles, they included:
(1) The ability of consumers to “access the lawful Internet content of their
choice”;
(2) the ability of consumers to “run applications and use services of their choice,
subject to the needs of law enforcement”;
(3) the ability of consumers to “connect their choice of legal devices that do not
harm the network”; and
(4) the existence of “competition among network providers, application and
service providers, and content providers.”
217
In approving the AT&T/SBC and Verizon/MCI mergers in 2005, the FCC
required the companies to adhere to connectivity principles set forth in its Broadband
Policy Statement for a period of two years.
218
More recently, in approving the
215
Id. (italics included in published version of address).
216
See In re Appropriate Framework for Broadband Access to the Internet over Wireline Facilities, 20 FCC
Rcd 14986 (2005) (policy statement).
217
Id. Also in 2005 – prior to issuance of the Wireline Order – the FCC took enforcement action against
allegedly discriminatory behavior by an ISP. In re Madison River Communs., LLC, 20 FCC Rcd 4295,
4297 (2005). The resulting consent decree in that matter required a small North Carolina ISP to “not block
ports used for VoIP applications or otherwise prevent customers from using VoIP applications.” Id.
Because the FCC used its Title II authority in this case, under which it can regulate common carrier
services, this case may not be precedent for future enforcement authority over such services now
characterized as information services and regulated under the FCC’s Title I ancillary jurisdiction. See also
infra Chapters VII.B and IX.B for additional discussion of the Madison River matter.
218
See In re SBC Communs. Inc. & AT&T Corp. Applications for Approval of Transfer of Control, 20
FCC Rcd 18290 (2005) (memorandum opinion and order) (especially appendix F); In re Verizon
Communs. Inc. & MCI Inc. Applications for Approval of Transfer of Control, 20 FCC Rcd 18433 (2005)
(memorandum opinion and order) (especially appendix G).
The DOJ also examined the proposed mergers and successfully sought, under the Tunney Act, the
divestiture of certain assets as conditions to such mergers. See United States v. SBC Communs., Inc., Civ.
AT&T/BellSouth merger, the FCC required the combined company to agree not to
provide or sell (for a period of thirty months following the merger closing date) “any
service that privileges, degrades, or prioritizes any packet transmitted over
AT&T/BellSouth’s wireline broadband Internet access services based on its source,
ownership, or destination.”
219
Most recently, the FCC announced an inquiry “to better understand the behavior
of participants in the market for broadband services.”
220
Among other things, the FCC is
seeking information regarding the following:
How broadband providers are managing Internet traffic on their networks today;
Whether providers charge different prices for different speeds or capacities of service;
Whether our policies should distinguish between content providers that charge end users for access to content and those that do not; and
How consumers are affected by these practices.
221
In addition, the FCC has asked for comments “on whether the [Broadband] Policy
Statement should incorporate a new principle of nondiscrimination and, if so, how would
‘nondiscrimination’ be defined, and how would such a principle read.”
222
Action Nos. 05-2102 (EGS) & 05-2103 (EGS), 2007 WL 1020746 (D.D.C. Mar. 29, 2007). In particular,
the merging parties were required to divest themselves of long-term interests in certain local private line, or
special access, facilities. Id. at *5 (noting that “[a]part from the difference in geographic scope due to the
identities of the parties, the proposed final judgments are practically identical and require the same type of
divestitures.”). See infra Chapter VI.B for a discussion of special access facilities and their relationship
with broadband Internet services.
219
In re AT&T Inc. & BellSouth Corp. Application for Transfer of Control, 22 FCC Rcd 5662 (2006)
(memorandum opinion and order). Two FCC Commissioners issued a concurring statement expressing
their view that “[t]he conditions regarding net-neutrality have very little to do with the merger at hand and
very well may cause greater problems than the speculative problems they seek to address.” Id. at 5826
(Chairman Martin & Comm’r Tate, concurring).
The DOJ also reviewed the AT&T/BellSouth merger, examining, among other things, the merged
firm’s ability or incentive to favor its own Internet content over that of its rivals. See Press Release, DOJ,
Statement by Assistant Attorney General Thomas O. Barnett Regarding the Closing of the Investigation of
AT&T’s Acquisition of BellSouth 3 (Oct. 11, 2006), available at
http://www.usdoj.gov/atr/public/press_releases/2006/218904.pdf. The DOJ concluded its investigation last
October, finding that “the merger would neither significantly increase concentration in markets for the
provision of broadband services to end users nor increase Internet backbone market shares significantly.”
Id.
220
Press Release, FCC, FCC Launches Inquiry into Broadband Market Practices (Mar. 22, 2007), available
at http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-271687A1.pdf.
221
Id.
* * *
The legal and regulatory developments discussed above have prompted the
current debate over network neutrality regulation. In the next Chapter, we provide an
overview of the arguments in favor and against such regulation that have been put forth
to date.
222
Id.
III. OVERVIEW OF ARGUMENTS IN FAVOR OF AND AGAINST
NETWORK NEUTRALITY REGULATION
Technology experts have recognized since the Internet’s earliest days that
network resources are scarce and that traffic congestion may lead to reduced
performance.
223
Although such experts continued to explore different data-transmission
protocols and the viability of market-based pricing mechanisms through the 1980s and
1990s, the current debate over broadband connectivity policy did not accelerate until
more recently.
224
At about the same time that the FCC began its cable broadband
rulemaking proceedings in 2000,
225
data routing technologies advanced to the point
where some network operators began openly to consider using prioritization and other
active management practices to improve network management and provide certain
premium services for a fee.
226
Various interested parties, including some content and applications providers,
non-facilities-based providers of Internet services, and third-party commentators, have
expressed concern about network operators’ use of these routing technologies in an
environment that is not subject to common carrier regulation. Some of them, therefore,
have proposed that the transmission of data on the Internet be subject to some type of
“network neutrality” rules that forbid or place restraints on some types of data or price
discrimination by network operators.
227
This Chapter summarizes the major arguments in
favor of (in Section A) and against (in Section B) the enactment of some form of network
neutrality regulation put forth to date.
228
Arguments involving data discrimination and
prioritization, as well as competition and consumer protection issues, are addressed in
more detail below in Chapters IV through VIII of this Report.
223
See supra Chapter I.A.
224
See generally Vinton G. Cerf & David Farber, The Great Debate: What is Net Neutrality?, Hosted by
the Center for American Progress (July 17, 2006), available at
http://www.americanprogress.org/kf/060717%20net%20neutrality.pdf
; Tim Wu & Christopher Yoo,
Keeping the Internet Neutral?: Timothy Wu and Christopher Yoo Debate (Vand. Pub. Law, Research Paper
No. 0-27, 2006), available at http://ssrn.com/abstract=953989.
225
See supra Chapter II.C for a discussion of relevant FCC proceedings.
226
See supra Chapter I.A.
227
See, e.g., Tim Wu, Network Neutrality, Broadband Discrimination, 2 J. ON TELECOMM. & HIGH TECH.
L. 141, 151 (2005) (“Over the history of communications regulation, the Government has employed both
common carriage requirements (similar to the neutrality regime discussed here) and limits on vertical
integration as [a] means of preventing unwanted discrimination.”). See also Cohen, Tr. II at 195 (arguing
that network neutrality regulation “is really a return to the status quo as where it was [in August 2005 and
before Brand X] so it’s not . . . a new set of regulations”).
228
This Chapter is not intended to be a comprehensive treatment of the many arguments put forth in favor
of and against network neutrality. Instead, this Chapter serves as a general survey of the types of
arguments raised by both sides of the network neutrality debate. Nor does this Chapter attribute every
single argument or variation thereon to every individual or entity that has made such arguments.
A. Arguments in Favor of Network Neutrality Regulation
Proponents of network neutrality regulation argue, among other things, that the
existing jurisdiction of the FCC, FTC, and DOJ, as well as oversight by Congress, is
insufficient to deal with what they predict will be inevitable and far-reaching harms from
so-called non-neutral practices. They suggest that after recent legal and regulatory
determinations, providers of certain broadband Internet services now have the legal
authority to act as gatekeepers of content and applications on their networks.
Principally, these advocates express concern about: (1) blockage, degradation,
and prioritization of content and applications; (2) vertical integration by network
operators into content and applications; (3) effects on innovation at the “edges” of the
network (i.e., by content and applications providers); (4) lack of competition in “last-
mile” broadband services; (5) legal and regulatory uncertainty in the area of Internet
access; and (6) diminution of political and other expression on the Internet. Net
neutrality proponents argue that various harms are likely to occur in the absence of
neutrality regulation and that it will be difficult or impossible to return to the status quo if
non-neutral practices are allowed to become commonplace. Proponents thus see an
immediate need to enact neutrality regulation.
229
1. Concerns about Blockage and Degradation of Non-Favored Content
and Applications
Network neutrality advocates suggest that, without neutrality rules, network
operators will use packet-inspection technologies to favor the transmission of their own
content and applications, or those of their affiliates, over those of other providers instead
of offering the unrestricted access generally available to end users today.
230
They
frequently suggest that end users’ access to the wider Internet will become balkanized
and restricted to what network operators choose to display in their own proprietary
“walled gardens.” Proponents believe such walled gardens will look more like the
original America Online dial-up service or even an Internet version of cable television,
with access to only a limited number of favored sites. Proponents further point to
preferential practices in other industries, such as cable television and telephony, as
indications of the likelihood that network operators will adopt comparable practices in the
absence of net neutrality regulation.
231
229
See, e.g., Cohen, Tr. II at 150 (“I can’t take the view that we should start from the premise of wait until
it’s all destroyed before we do anything about it.”).
230
See, e.g., Wu, supra note 227. See also EARL W. COMSTOCK, WHAT IS NET NEUTRALITY? (2006),
available at http://www.comptel.org/content.asp?contentid=658
; G. Sohn, Tr. I at 98; Farrell, Tr. I at 220.
231
See, e.g., Lawrence Lessig & Robert W. McChesney, No Tolls on the Internet, WASH. POST, June 8,
2006, at A23. Lessig and McChesney suggest that “[w]ithout net neutrality, the Internet would start to look
like cable TV. A handful of massive companies would control access and distribution of content, deciding
what you get to see and how much it costs.” Id. See also Tulipane, Tr. I at 259-66. In Tulipane’s view,
“prioritization based on source or content will result in a closed network, just like the cable system today.”
Id. at 266. Similarly, Sohn suggests: “[s]hort of outright blocking, ISPs could engage in various forms of
Advocates of net neutrality point to certain statements by ISP executives as
evidence of their intent to treat some content and applications differently than others.
232
They cite to the Madison River
233
matter as evidence that network operators do, in fact,
have the technological means and incentive to actively degrade or outright block certain
content and applications.
234
They also question whether end users will be able to
determine readily why certain content and applications might be unavailable or might run
more slowly or less reliably than others.
235
Some also suggest that the introduction of
specialized, virtual private networks (“VPNs”) that require users to purchase premium
service packages foreshadows the advent of a balkanized, non-neutral Internet.
236
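As a purely illustrative sketch of the kind of mechanism these concerns presuppose, the following fragment (written for this discussion; it does not depict any operator's actual equipment or practice, and the host names and values are invented) inspects a packet's addressing information and assigns it a treatment, favoring a hypothetical affiliated service and restricting ports commonly used for VoIP signaling:

    from typing import Dict

    # Hypothetical values used only for illustration.
    AFFILIATED_HOSTS = {"video.affiliate.example"}   # an operator-affiliated service
    VOIP_SIGNALING_PORTS = {5060, 5061}              # ports commonly used for VoIP signaling

    def classify(packet: Dict) -> str:
        """Return 'favored', 'restricted', or 'best-efforts' for a packet,
        based only on its destination host and port."""
        if packet["dst_host"] in AFFILIATED_HOSTS:
            return "favored"
        if packet["dst_port"] in VOIP_SIGNALING_PORTS:
            return "restricted"     # e.g., a competing Internet telephony service
        return "best-efforts"

    print(classify({"dst_host": "video.affiliate.example", "dst_port": 443}))
    print(classify({"dst_host": "voip.rival.example", "dst_port": 5060}))
    print(classify({"dst_host": "blog.example", "dst_port": 80}))

The Madison River consent decree discussed in note 233 involved conduct of the second kind, the blocking of ports used for VoIP traffic.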
In particular, these proponents warn that network operators might try to disfavor
some content and applications by inhibiting or forbidding users from attaching related
devices to their networks, such as the VoIP phone equipment of competing Internet
telephony providers or VoIP-enabled mobile phones.
237
They also state that cable
companies have, in fact, blocked streaming video applications to protect their own cable
television businesses and that wireless phone companies have placed limits on the types
of content and applications that can be accessed using their wireless Internet services.
238
Some network neutrality proponents also contend that network operator bans on
the use of basic residential packages to operate VPNs, open-access Wi-Fi antennas that
support multiple users, home networks, and computer servers all amount to violations of
neutrality principles.
239
Some, but not all, proponents, however, believe that such
discrimination, and the fears [sic] that could have the practical effect of driving innovators to really have
now a practical need to seek deals with each recipient’s ISP.” D. Sohn, Tr. II at 227-28.
232
See supra note 64.
233
In Madison River, an ISP allegedly blocked its customers from accessing a competing VoIP provider.
The ISP entered into a consent decree with the FCC that prohibited the ISP from blocking ports used for
VoIP traffic. The ISP also made a voluntary payment of $15,000 to the U.S. Treasury. In re Madison
River Communs., LLC, 20 F.C.C.R. 4295, 4297 (2005).
234
See, e.g., Davidson, Tr. I at 227-28. For Davidson, “prioritization in the last mile creates real concerns.
Particularly, we are concerned that prioritization through router-based discrimination in the last mile
degrades computing services, and creates incentives to relegate some of those computing services to a slow
lane.” Id.
235
See supra Chapter I.C.5.
236
See, e.g., Yokubaitis, Tr. II at 108.
237
See, e.g., Libertelli, Tr. I at 73 (“[F]or Skype, network neutrality is about protecting our users’ ability to
connect to each other, whenever and wherever they want. We support net neutrality[] because it embodies
a policy of decentralized innovation.”).
238
See, e.g., John Windhausen, Jr., Good Fences Make Bad Broadband: Preserving an Open Internet
Through Net Neutrality 16-23 (Public Knowledge White Paper, 2006), available at
http://www.publicknowledge.org/pdf/pk-net-neutrality-whitep-20060206.pdf.
239
See, e.g., id.
restrictions may be justified because they are meant to solve situations in which a few
users generate costs that are imposed on other users.
240
2. Concerns about Charging Content and Applications Providers for
Prioritized Data Delivery
Net neutrality advocates also express concern that, short of outright blockage or
active degradation, network operators will present certain content and applications to
users in a preferential manner in exchange for payment. They express concern that
network operators may, for example, use packet-inspection technology to provide quicker
load times for certain providers’ Web pages or faster and more consistent connections for
favored VoIP or streaming video providers.
241
Some network operators have, in fact,
indicated that they would like to offer certain prioritized services or other kinds of
quality-of-service guarantees in exchange for a premium fee.
242
Some neutrality advocates object to the idea of a network offering prioritized data
transmission or quality-of-service guarantees in exchange for payment.
243
That is, they
object to a deviation from the long-standing first-in-first-out and best-efforts transmission
characteristics of the Internet. They are concerned about the potential for prioritization to
result in blocking or degradation of non-favored content and applications. These
advocates are concerned that content and applications from providers affiliated with the
network operator or having a greater ability to pay will be available in a “fast lane,” while
others will be relegated to a “slow lane,” discriminated against, or excluded altogether.
244
Further, creating priority fast lanes, according to some advocates, necessarily would
240
See, e.g., Wu, supra note 227, at 152.
241
See, e.g., Editorial, Open Net, THE NEW REPUBLIC, June 26, 2006, available at
http://www.tnr.com/doc.mhtml?pt=oy4NRC5%2Bfnu%2Fm585FtGwlC%3D%3D.
242
See infra Chapter III.B.
243
See, e.g., Davidson, Tr. I at 228. In his view:
[W]hat we’re worried about is in that context, the power to prioritize in the last mile
effectively becomes the power to control the applications and content that customers can
effectively use.
So, imagine, for example, that a last mile provider with market power might be
able to use prioritization to, for example, relegate a competing Voice over IP provider to
a lower quality slow lane. It might prevent a competing video provider – prevent a
competing video service from accessing a higher tier of priority necessary to provide
good service, and preference its own services instead.
Id. See also Tulipane, Tr. I at 259-66.
244
See, e.g., Davidson, Tr. I at 229-30. According to Davidson, “[w]e are concerned about creating a fast
lane tier of traffic that is susceptible of exclusive dealings.” Id. at 229. In his view, “prioritization that
provides an incentive to create slow lanes so that you can charge people for the fast lanes is something that
we think is problematic.” Id. at 230.
result in (intentionally or effectively) degraded service in the remainder of the network.
245
Likewise, some advocates object to the creation of private networks that might provide
prioritized data transmission or other forms of quality of service to only a limited number
of customers, arguing that this will represent the “end” of the Internet as we know it.
246
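The scheduling distinction at issue can be made concrete with a minimal, purely illustrative sketch (it is not drawn from the record and does not model any particular network): a first-in-first-out queue serves packets in arrival order, while a strict two-tier priority scheduler always serves "fast lane" packets ahead of best-efforts traffic that arrived earlier.

    from collections import deque

    def fifo_schedule(packets):
        """Serve packets strictly in arrival order (first-in-first-out, best efforts)."""
        queue = deque(packets)
        return [queue.popleft() for _ in range(len(queue))]

    def priority_schedule(packets):
        """Serve every 'fast'-tier packet before any 'slow'-tier packet,
        regardless of arrival order (a strict two-tier priority scheduler)."""
        fast = deque(p for p in packets if p["tier"] == "fast")
        slow = deque(p for p in packets if p["tier"] == "slow")
        order = []
        while fast or slow:
            order.append(fast.popleft() if fast else slow.popleft())
        return order

    # Hypothetical arrivals: a non-paying provider's packet arrives first.
    arrivals = [
        {"id": "independent-blog", "tier": "slow"},
        {"id": "affiliate-video", "tier": "fast"},
        {"id": "rival-voip", "tier": "slow"},
    ]
    print([p["id"] for p in fifo_schedule(arrivals)])      # arrival order preserved
    print([p["id"] for p in priority_schedule(arrivals)])  # paid traffic served first

Under the second scheduler, during congestion the "slow" queue drains only when the "fast" queue is empty, which is the effect proponents describe as relegation to a slow lane.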
Some advocates, therefore, argue that content and applications providers should
not be allowed to pay a premium fee for prioritized data transmission, even if they want
to do so. They object, for example, to a possible two-sided market model where content
and applications providers pay networks for prioritization in the same way that merchants
subsidize the purchase price of a newspaper by paying for the placement of
advertisements in return for greater consumer exposure to their advertisements.
247
Instead, in this view, networks should be required to derive revenues principally from
providing Internet access to residential and business customers.
248
Some advocates who
object to prioritized data transmission would, however, allow network operators to charge
end users more for the consumption of larger amounts of bandwidth.
249
Other advocates do not strictly object to prioritization or quality of service for a
fee.
250
They argue, however, that different levels of prioritization should be offered on
uniform terms to all “similar” content and applications providers and that all end users be
245
See, e.g., id. at 228-30 (“[P]rioritization . . . in the last mile degrades competing services, and creates
incentives to relegate some of those competing services to a slow lane . . . [given] that the only way that
you can have a fast lane that you can charge for, that is useful, is if there are also slow lanes that are less
useful, and less attractive.”).
246
See, e.g., Lessig & McChesney, supra note 231. Lessig and McChesney predict that, without neutrality
rules, network operators will use data prioritization “to sell access to the express lane to deep-pocketed
corporations and relegate everyone else to the digital equivalent of a winding dirt road.” In their view,
“[n]et neutrality means simply that all like Internet content must be treated alike and moves at the same
speed over the network.” Id.
247
See Pepper, Tr. I at 87 (“The last set of questions on net neutrality concern who can be charged for what
service on broadband connections. Should the Internet access be funded solely by consumers, or can the
cost be shared with content providers and application providers?”).
248
See, e.g., Editorial, supra note 241 (“Net neutrality would prohibit all of this. Telecoms could make
money the way they always have – by charging homes and businesses for an Internet connection – but
they couldn’t make money from the content providers themselves.”). See also Sidak, Tr. I at 107 (“In other
words, they don’t have a problem with network operators and end users contracting for prioritized delivery.
The problem they have is . . . with suppliers of content.”).
249
See, e.g., Davidson, Tr. I at 228 (“Not all network management is anti-competitive prioritization. And
there are a lot of things I think many of us agree that are not problematic in this context. So, charging end
users, whether it’s businesses or consumers, more for more bandwidth, not a problem here.”). See also
COMSTOCK, supra note 230.
250
See, e.g., D. Sohn, Tr. II at 230. In Sohn’s view, network neutrality regulation “wouldn’t need to
involve a complete ban on all prioritization, even on the Internet part. I think in particular, an ISP should
be free to offer prioritization capability that enables subscribers to choose what services to use it with.” Id.
See also Cohen, Tr. II at 150 (“There are and should remain many networks on which network providers
are free to discriminate based on the source, ownership or destination of data . . . .”).
guaranteed a minimum level of access to the entire universe of Internet content.
251
Another advocate suggests that network operators should be free to create specialized
service parameters and to provide prioritized data transmission, but with a requirement
that networks also maintain a basic level of best-efforts Internet service.
252
Some network neutrality proponents further suggest that, as the speed of the
Internet continues to increase with the deployment of faster technologies like fiber-optic
wirelines and improved wireless transmissions, the issue of prioritization may become
irrelevant.
253
They suggest that when Internet speeds of upwards of 100 megabits per
second (“Mbps”) are widely available, first-in-first-out and best-efforts delivery at these
rates should be sufficient to transmit all Internet traffic without any problems, even for
advanced and time-sensitive applications. These proponents suggest that all congestion
and bandwidth scarcity issues will effectively disappear at these speeds and the issue of
prioritization will eventually be moot. A neutrality regime, therefore, can be seen as a
temporary remedy for a problem that ultimately will be outgrown and an important
measure that will prevent network operators from creating artificial scarcity in their
networks in the meantime to derive additional revenues by charging content and
applications providers for new types of data transmission.
254
Thus, some of these
251
See, e.g., Wilkie, Tr. I at 170 (“The caveat might be that you might want to add that tiering and offering
higher levels of prioritization are allowable, but they would have to be offered on a non-discriminatory
basis, or what economists call ‘second degree price discrimination,’ that is, the prices are functions of the
level of functionality offered, not the identity of the customer.”). See also G. Sohn, Tr. I at 128 (advocating
that if one content or applications provider negotiates a particular service arrangement with a network
operator, a second competing content or applications provider should “absolutely” be provided with an
identical arrangement by the operator without having to engage in separate negotiations).
252
See, e.g., Press Release, USC Annenberg Center, Annenberg Center Releases Principles for Network
Neutrality (2006), available at http://www.annenberg.edu/news/news.php?id=13. See also D. Sohn, Tr. II
at 226 (suggesting that the optimum outcome is “to keep this neutral open Internet at an acceptable level of
service, to keep that in existence even as experimentation with other networks . . . proceeds”).
253
See, e.g., Network Neutrality: Competition, Innovation, and Nondiscriminatory Access: Hearing Before
the S. Comm. on Commerce, Sci., & Transp., 109th Cong. (2006) (testimony of Gary R. Bachula, Vice
President, Internet2) [hereinafter Bachula Senate Testimony], available at
http://commerce.senate.gov/pdf/bachula-020706.pdf; Bachula, Tr. II at 164-74. See also Davidson, Tr. I at
231 (“In most cases, the best way to deal with any concerns about prioritization is to provide better
broadband, higher bandwidth offerings to consumers.”).
254
According to Bachula:
When we first began to deploy our Internet2 network some eight years ago, our
engineers started with the assumption that we would have to find technical ways of
prioritizing certain bits, such as streaming video or video conferencing, in order to ensure
that they arrived without delay.
For a number of years, we seriously explored various quality of service
techniques, conducted a number of workshops and even convened an ongoing quality of
service working group, but as it developed, all of our research and practical experience
supported the conclusion that it was far more cost effective to simply provide more
bandwidth. It was cheaper to provide more bandwidth than to install these sophisticated
quality of service prioritization techniques.
proponents believe that, instead of allowing network operators to engage in prioritization,
policy makers should focus on creating incentives for the deployment of next-generation,
high-speed networks.
255
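The intuition behind this argument can be illustrated with rough, hypothetical figures (the per-application rates below are assumptions chosen only for purposes of example, not findings of this Report):

    # Illustrative capacity check for a hypothetical 100 Mbps connection.
    link_mbps = 100.0
    concurrent_load_mbps = {
        "high-definition video stream": 8.0,
        "standard-definition video stream": 2.0,
        "VoIP call": 0.1,
        "web browsing and e-mail": 1.0,
        "large file download": 20.0,
    }
    total = sum(concurrent_load_mbps.values())
    print(f"concurrent demand: {total:.1f} Mbps on a {link_mbps:.0f} Mbps link")
    print(f"headroom: {link_mbps - total:.1f} Mbps "
          f"({100 * (1 - total / link_mbps):.0f}% of capacity unused)")

On these assumptions the link remains far from saturated even with every listed use running at once, which is the sense in which congestion, and hence the case for prioritization, is said to recede as access speeds rise; opponents respond that expanding capacity is costly and that future applications may consume it, as discussed in Chapter III.B.2 below.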
3. Concerns about Vertical Integration
Net neutrality proponents also express concern about the prospect of network
operators integrating vertically into the provision of content and applications. Proponents
argue that network operators now have the legal and technological ability to control both
their own physical networks and the ability of content and applications providers to reach
end users. Proponents further suggest that vertically integrated network operators will
favor their own content and applications, or those of their affiliates, over others.
256
Some
of these proponents, therefore, argue that network operators’ ability to vertically integrate
should be legally restricted or forbidden altogether.
257
4. Concerns about Innovation at the “Edges” of the Internet
Proponents suggest that if so-called non-neutral practices are allowed to flourish
in the core of the networks that comprise the Internet, innovation by content and
applications developers that are connected to the Internet’s “edges” will suffer. Some
proponents, for example, are concerned about the complexity and cost that content and
applications providers would experience if they had to negotiate deals with numerous
network operators worldwide. They suggest that content and applications providers will
need to expend considerable resources to negotiate and enter into prioritization
agreements or other preferential arrangements with numerous networks and that many
(particularly, small) companies will not be able to pay the fees that operators will demand
to reach end users in a competitive manner.
258
Thus, they fear that innovators will be
With enough bandwidth in the network, there is no congestion, and video bits do
not need preferential treatment. All the bits arrive fast enough even if intermingled.
Bachula, Tr. II at 169.
255
Robert D. Atkinson & Philip J. Weiser, A “Third Way” on Network Neutrality, 13 THE NEW ATLANTIS
47, 58-59 (2006), available at http://www.thenewatlantis.com/archive/13/TNA13-AtkinsonWeiser.pdf.
These commentators suggest that Congress should allow companies investing in broadband networks to
expense new broadband investments in the first year and also extend the moratorium on federal, state, and
local broadband-specific taxes, but make it contingent upon provision of an open, best-efforts level of
Internet service. Id. See also generally Lehr, Tr. I at 36 (“[Over time, network] penetration saturates. And
so, revenues growth slows. And the question is that if we want the industry to continue to meet the growth
in traffic, we have to figure [out] what the incentives are.”).
256
See, e.g., Joseph Farrell, Open Access Arguments: Why Confidence is Misplaced, in NET NEUTRALITY OR
NET NEUTERING: SHOULD BROADBAND INTERNET SERVICES BE REGULATED?, supra note 42, at 195.
257
See, e.g., Christian Hogendorn, Regulating Vertical Integration in Broadband: Open Access Versus
Common Carriage, 4 REV. NETWORK ECON. 19, 30 (2005).
258
See, e.g., Davidson, Tr. I at 224-33. According to Davidson, “[a]s our founders have said, two graduate
students in a dorm room with a good idea would not have been able to create this service if the first thing
that they had to do was to hire an army of lawyers and try to reach carriage agreements with providers all
blocked, actively degraded, or provided with low-priority data transmissions, and the
development of the next revolutionary Internet site or application may be inhibited. They
predict that spontaneous innovation will be precluded or forced to proceed through
established businesses already having significant capital and favored relationships with
network operators.
259
Similarly, net neutrality proponents sometimes argue that non-
profit and educational entities may be at a disadvantage relative to highly capitalized
businesses.
260
5. Concerns about “Last-Mile” Competition in Broadband Service
Net neutrality proponents typically argue that a cable-telephone duopoly exists in
most markets for last-mile broadband connections and that competition from only two
broadband providers is not sufficient to check the harms that they envision. Net
neutrality proponents generally do not believe that one of these competitors will provide
users with an acceptable, alternative open service if the other decides to pursue exclusive
deals or data prioritization. Proponents also typically express doubt about the potential of
newer technologies like wireless Internet and broadband over powerlines to provide in
the near future a robust, competitive alternative to the access offered by the cable and
telephone companies.
261
A related concern expressed by some network neutrality proponents is that last-
mile ISPs might not disclose to end users the ISPs’ differential treatment of certain data
and that they will be able to get away with such non-disclosure due to a lack of viable
competitive alternatives in the marketplace or the difficulty of tracing problems to ISPs’
practices. Proponents also suggest that, to the extent that such disclosures are made by
ISPs, many end users will not be able to readily understand them, making such
around the world.” Id. at 226. See also Cohen, Tr. II at 152 (“[Historically, Internet start-ups] did not have
to negotiate. They did not have to persuade or cajole network providers for special treatment.”); Center for
Creative Voices in Media, Public Comment 6, at 2 (“Artists must have the freedom to distribute their works
over the broadband Internet, and the American public must have the freedom to choose from among those
works, rather than have the cable and telephone broadband providers who overwhelmingly control the
market for broadband deny those freedoms and make those choices for them.”).
259
See, e.g., Mark A. Lemley & Lawrence Lessig, The End of End-to-End: Preserving the Architecture of
the Internet in the Broadband Era, 48 UCLA L. REV. 925 (2001). Lemley and Lessig suggest that, “[i]f
that strategic actor owns the transmission lines itself, it has the power to decide what can and cannot be
done on the Internet. The result is effectively to centralize Internet innovation within that company and its
licensees.” Id. at 932. See also Farrell, Tr. I at 154 (“[T]here is a concern if you allow last mile providers
to make charges on content providers, there is a concern about possible expropriation of successful content
providers.”).
260
See, e.g., Reconsidering Our Communications Laws: Ensuring Competition and Innovation: Hearing
Before the S. Comm. on the Judiciary, 109th Cong. (2006) (statement of Jeff C. Kuhns, Senior Director,
Consulting and Support Services, Information Technology Services, The Pennsylvania State University),
available at http://judiciary.senate.gov/testimony.cfm?id=1937&wit_id=5418.
261
See, e.g., Feld, Tr. II at 18-19; Putala, Tr. II at 29 (“The much heralded independent alternatives are still
tiny.”); Wu, Tr. II at 255 (“I have been hearing that for ten years. I’ve never met anyone who has a
connection, broadband over power line, and it has been used a million times . . . .”).
disclosures ineffective in checking potential ISP misconduct.
262
Some network neutrality
proponents also argue that the use of data packet inspection and other traffic analysis
technologies by network operators may give rise to privacy concerns that end users might
not readily recognize.
263
6. Concerns about Legal and Regulatory Uncertainty
Net neutrality advocates suggest that the FCC’s recently issued broadband
principles, its ancillary jurisdiction over broadband providers under Title I of the
Communications Act of 1934, and the antitrust laws are insufficient to prevent or police
potentially harmful conduct by broadband providers.
264
In particular, they argue that the
FCC’s broadband principles are not legally enforceable, that the full scope of its Title I
authority has yet to be determined, and that any remedial action is likely to result in years
of litigation and appeals, leaving the status of the Internet in doubt.
265
Neutrality
advocates argue that more concrete examples of alleged harms, beyond Madison River,
do not exist primarily because network operators have been on their best behavior in the
short time since recent legal and regulatory determinations were handed down, to avoid
attracting further scrutiny. Proponents argue that without further regulation, however,
network operators will likely engage in such practices in the future and that there will be
no practical way to prevent or remedy the resulting harms without a comprehensive, ex
ante regulatory regime.
266
7. Concerns about Political and Other Expression on the Internet
Advocates suggest that, without a network neutrality rule, operators will likely
engage in practices that will reduce the variety and quality of content available to users,
generally. In particular, they suggest network operators may degrade or block content
that they find to be politically or otherwise objectionable or contrary to their own
262
See, e.g., Kenney, Tr. II at 103 (“I think these disclosure issues are important, but I don’t think that’s the
issue here today. In fact, the elephant in the room is whether or not disclosure of prioritization practices is
sufficient to remedy the harm.”).
263
See, e.g., id. (“I don’t think anyone has a full understanding of what sort of security and vulnerability
issues are at stake with deep packet inspection technologies.”).
264
See, e.g., Libertelli, Tr. I at 117 (“[W]e’re talking about a policy statement [(the FCC principles)]; we’re
not necessarily talking about a binding rule of decision.”); Farrell, Tr. I at 159 (“I am not convinced that
anti-trust, as currently enforced, is going to do a good job on those potential problems.”).
265
See, e.g., Network Neutrality: Competition, Innovation, and Nondiscriminatory Access: Hearing Before
the H. Comm. on the Judiciary, Task Force on Telecom & Antitrust, 109th Cong. 23, 35 (2006) (prepared
statement of Earl W. Comstock, President and CEO, COMPTEL) [hereinafter Comstock House
Testimony], available at http://judiciary.house.gov/media/pdfs/printers/109th/27225.pdf.
266
See, e.g., Misener, Tr. II at 142 (“[W]e really believe that it would be in consumers and industry’s best
interest for certainty and for a national policy to be set by the Federal Government at the very highest level
. . . .”).
business interests.
267
Neutrality advocates suggest that other types of speech, such as
individuals’ Web logs, may also be disfavored or blocked as the incidental result of an
operator’s more general decisions about favoring certain content providers over others.
268
This argument appears to be a variation on the suggestion that, without a neutrality
regime, innovation (or, in this case, speech) at the edges of the network will be
inhibited.
269
B. Arguments against Network Neutrality Regulation
Opponents of network neutrality regulation include facilities-based wireline and
wireless network operators, certain hardware providers, and other commentators. These
parties maintain that imposing network neutrality regulation will impede investment in
upgrading Internet access and may actually hamper innovation. They also argue that,
apart from the Madison River case, the harms projected by net neutrality proponents are
merely hypothetical and do not merit a new, ex ante regulatory regime.
Principally, these opponents argue that: (1) the Internet is not neutral and never
truly has been, and a neutrality rule would effectively set in stone the status quo and
preclude further technical innovation; (2) effective network management practices
require some data to be prioritized and may also require certain content, applications, and
attached devices to be blocked altogether; (3) there are efficiencies and consumer benefits
from data prioritization; (4) new content and applications also require this kind of
network intelligence; (5) network operators should be allowed to innovate freely and
differentiate their networks as a form of competition that will lead to enhanced service
offerings for content and applications providers and other end users; (6) prohibiting
network operators from charging different prices for prioritized delivery and other types
of quality-of-service assurances will reduce incentives for network investment generally
267
See, e.g., Bill D. Herman, Opening Bottlenecks: On Behalf of Mandated Network Neutrality, 59 FED.
COMM. L.J. 107, 118 (2007) (submitted to FTC as Public Comment 26) (“A broadband provider should no
more be able to stop a customer’s email or blog post due to its political content than a telephone company
should be permitted to dictate the content of customers’ conversations.”). See also Peha, Tr. I at 26 (“There
could also be content filtering for other reasons. Perhaps for political reasons I will want to limit access to
advocacy groups for issues I oppose, or candidates I oppose.”).
268
See, e.g., Barbara A. Cherry, Misusing Network Neutrality to Eliminate Common Carriage Threatens
Free Speech and the Postal System, 33 N. KY. L. REV. 483, 507 (2006) (submitted to FTC as Public
Comment 8) (“If antitrust principles are insufficient to substitute for the functions that common carriage
and public utility obligations have served in providing access, then free speech rights of individuals will be
sacrificed to serve economic interests of corporate owners of broadband facilities.”); Feld, Tr. II at 15
(“Goal number . . . two is the Internet is open and diverse as it exists today or better. . . . The First
Amendment cares about this stuff. Our democracy depends on this stuff, and Congress has told us to
protect it as part of the policy. Any policy that doesn’t protect that, even if it is more economically
efficient, is a failed policy.”). But compare Thomas B. Leary, The Significance of Variety in Antitrust
Analysis, 68 ANTITRUST L.J. 1007, 1019 (2001) (raising the question of “whether an increase or decrease in
available variety, by itself, merits independent consideration in antitrust analysis”).
269
See, e.g., G. Sohn, Tr. I at 134 (“The Internet actually takes away the gate keepers, so people can engage
in democratic discourse, eCommerce, innovation. It’s been great. And at a certain point, we have to ask
ourselves, do we want it to remain that way?”).
and prevent networks from recouping their investments from a broader base of
customers, a practice which might, in turn, reduce prices for some end users; (7) vertical
integration by network operators into content and applications and certain bundling
practices may produce efficiencies that ultimately benefit consumers; and (8) there is
insufficient evidence of potential harm to justify an entirely new regulatory regime,
especially when competition in broadband services is robust and intensifying and the
market is generally characterized by rapid, evolutionary technological change.
1. Historical and Existing Non-Neutrality of the Internet
Opponents of network neutrality regulation argue that the Internet is not, and
never truly has been, “neutral.”
270
These opponents generally agree that the first-in-first-
out and best-efforts characteristics of the TCP/IP data-transmission protocol have played
a significant role in the development of the Internet.
271
They point out, however, that
since the earliest days of the Internet, computer scientists have recognized that data
congestion may lead to reduced network performance and have thus explored different
ways of dealing with this problem.
272
Net neutrality opponents point out that all network routers must make decisions
about transmitting data and argue that such decisions invariably have implications that
may not be strictly uniform or neutral. In particular, they note that networks have long
employed “hot potato” routing policies that hand off data not destined for termination on
their own networks to other networks at the earliest possible point. A
principal goal of hot potato routing is to reduce the usage of network resources.
273
Opponents note that, during periods of congestion, data packets may be rerouted along
another path or dropped altogether and that packets may need to be re-sent when
transmission errors occur.
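A minimal, purely illustrative sketch of the "hot potato" idea (not a depiction of any operator's actual routing policy; the names and costs are invented) selects, from several interconnection points with a neighboring network, the exit that is cheapest to reach across the operator's own network:

    # Hypothetical interconnection points and the internal cost (e.g., router hops)
    # of carrying a packet from its entry point to each exit. Values are invented.
    exit_points = {
        "exchange-west": 2,      # nearest to where this packet entered the network
        "exchange-central": 5,
        "exchange-east": 9,
    }

    def hot_potato_exit(exits):
        """Hand traffic off at the exit cheapest to reach internally,
        regardless of how far the neighboring network must then carry it."""
        return min(exits, key=exits.get)

    print(hot_potato_exit(exit_points))   # -> "exchange-west"

The handing-off network thereby conserves its own resources, even though the receiving network may have to carry the traffic farther; opponents cite the practice as one example of routing decisions whose effects are not strictly uniform.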
Opponents of net neutrality regulation argue that the TCP/IP protocol itself may
have differential effects for various content and applications.
274
For example, static Web
page content like text and photos and applications like e-mail generally are not sensitive
to latency. Thus, users typically can access them via the TCP/IP protocol without
270
See, e.g., Ryan, Tr. I at 238 (“IP networks do prioritize. They have from the beginning of time. The
prioritization that they had in the network at its inception was basically a first in line prioritization, first
in/first out. So it’s prioritization based on time, and time alone.”). See also McTaggart, supra note 117.
271
See supra Chapter I.A for a discussion of the TCP/IP protocol.
272
See generally supra Chapter I. See also Peha, Tr. I at 17 (“Actually, the [TCP/IP] protocol for 35 years
has allowed priority. But, for the most part, people haven’t used it. Or even implemented it.”).
273
See, e.g., McTaggart, supra note 117, at 10-12.
274
See, e.g., Yoo, Tr. II at 219. According to Yoo, “every protocol inherently favors some applications
over others. TCP/IP, first come, first served, very good at some things, worse at others. In a sense, there is
no neutral way to go here, by choosing one protocol over the other, you will actually be choosing winners
and losers.” Id.
noticeable problems, even during periods of congestion. Applications like streaming
video and videoconferencing, however, may be sensitive to latency and jitter.
275
Net
neutrality opponents argue, therefore, that while first-in-first-out and best-efforts
principles may sound neutral in the abstract, their practical effect may be to disfavor
certain latency- and jitter-sensitive content and applications, because those principles
preclude using prioritization to deliver the continuous, steady stream of data that users
expect even during periods of congestion.
276
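The asymmetry can be made concrete with a toy calculation (the delay figures are invented for illustration): the same variation in per-packet queuing delay that is imperceptible for e-mail can disrupt a real-time voice or video stream that must play out packets on a fixed schedule.

    # Hypothetical per-packet queuing delays (ms) on a congested best-efforts link.
    delays_ms = [20, 25, 90, 22, 130, 24, 21, 95]

    # E-mail and static pages: only the total transfer time matters.
    print("total extra delay for a file or e-mail:", sum(delays_ms), "ms")

    # Real-time audio: the receiver buffers 60 ms and then plays packets on a fixed
    # schedule; here, simplistically, any packet delayed more than the buffer depth
    # misses its playout slot and is heard as a gap.
    playout_buffer_ms = 60
    late = [d for d in delays_ms if d > playout_buffer_ms]
    print(f"voice packets missing their playout slot: {len(late)} of {len(delays_ms)}")

A deeper buffer would mask the variation at the cost of added delay, which is why such applications are described as sensitive to both latency and jitter.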
Network neutrality critics also note that content providers increasingly are using
local caching techniques to copy their content to multiple computer servers distributed
around the world, and argue that this practice effectively bypasses the first-in-first-out
and best-efforts characteristics of the TCP/IP protocol.
277
Critics further observe that
network operators have preferential partnerships with Internet “portal” sites to provide
users with greeting homepages when they log on, as well as customized and exclusive
content and applications.
278
Similarly, they note that portals, search engines, and other
content providers often give premium placement to advertisers based on their willingness
to pay.
279
In their view, these practices all constitute additional indicia of existing non-
neutrality.
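The local-caching practice noted at the start of the preceding paragraph can likewise be sketched in purely illustrative terms (the latency figures are invented): a provider that replicates its content serves each request from the copy closest to the end user, shortening the path over which best-efforts delivery must succeed.

    # Hypothetical round-trip latencies (ms) from an end user to each copy of the content.
    replica_latency_ms = {
        "origin server (distant)": 80,
        "regional cache": 25,
        "in-metro cache": 8,
    }

    def serve_from(replicas):
        """Answer the request from the replica with the lowest latency to the user."""
        best = min(replicas, key=replicas.get)
        return best, replicas[best]

    location, latency = serve_from(replica_latency_ms)
    print(f"served from {location} at roughly {latency} ms")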
2. Prioritization, Blockage, and Network Management Requirements
Network neutrality opponents frequently argue that operators should be allowed
actively to restrict or block data that they believe may be harmful to the performance of
275
See, e.g., Pepper, Tr. I at 85-86 (“The problem with non-discrimination is that it does not recognize that
treating different packets differently is necessary for the effective delivery of many services. As more real-
time interactive services dominate Internet traffic, it’s going to be more important to differentiate among
packets.”). See also McTaggart, supra note 117, at 12-14.
276
Some network neutrality proponents, such as Wu, have concluded that, “[a]s the universe of applications
has grown, the original conception of [Internet Protocol] neutrality has [become] dated; for IP was only
neutral among data applications. Internet networks tend to favor, as a class, applications insensitive to
latency (delay) or jitter (signal distortion).” Wu, supra note 227, at 149. Expanding on this point, some
network neutrality opponents, such as Yoo, have concluded that, because “TCP/IP routes packets
anonymously on a ‘first come, first served’ and ‘best efforts’ basis . . . it is poorly suited to applications that
are less tolerant of variations in throughput rates, such as streaming media and VoIP, and is biased against
network-based security features that protect e-commerce and ward off viruses and spam.” Christopher S.
Yoo, Beyond Network Neutrality, 19 H
ARV. J.L. & TECH. 1, 8 (2005). Therefore, in his view, “[c]ontrary to
what the nomenclature might suggest, network neutrality is anything but neutral.” Id.
277
See, e.g., McTaggart, supra note 117, at 6-7 (discussing Google’s distributed computing network).
278
See, e.g., id. at 4-5 (discussing network partnerships with portals such as Yahoo!, Microsoft MSN, and
Lycos). See also Waz, Tr. II at 162 (discussing the premium placement of portals on mobile phones).
279
See, e.g., McCormick, Tr. I at 273 (“[I]f any of us want to kind of envision what prioritization on the
Internet might look like, I mean, I think the clearest understanding of what we know prioritization would be
is looking at a Google search page.”).
their networks,
280
citing reports that a relatively small number of users can potentially
overwhelm network resources through the use of bandwidth-intensive applications, such
as peer-to-peer file-sharing and streaming video.
281
They warn that active network
management, prioritization, and other types of quality-of-service assurances are needed to
prevent the Internet, or its individual parts, from slowing down or crashing altogether in a
high-tech “tragedy of the commons.”
282
In their view, merely expanding network
capacity is expensive and may not be the most cost-effective method of network
management, and future content and applications may be even more resource-intensive
than applications like BitTorrent are today.
283
3. Efficiencies and Consumer Benefits from Prioritization
Network neutrality opponents argue that market transactions for prioritization and
other forms of quality of service can, in many cases, allocate scarce network resources in
280
Network neutrality proponents generally allow that some active management is necessary to maintain
network performance, but typically maintain that it should be limited. See, e.g., PUBLIC KNOWLEDGE,
PRINCIPLES FOR AN OPEN BROADBAND FUTURE: A PUBLIC KNOWLEDGE WHITE PAPER (2005), available at
http://www.publicknowledge.org/pdf/open-broadband-future.pdf. According to this group, “[s]ome have
maintained that network operators must have the ability to restrict access to the network for legitimate law
enforcement purposes, or for network management. While these examples may be valid, this authority can
be easily abused and should not be broadly permitted.” Id. at 10.
281
See supra Chapter I.C.1.
282
See, e.g., McCormick, Tr. I at 243. According to McCormick, “[a] better Internet doesn’t simply come
by adding capacity. Like road networks, rail networks, electrical networks, and traditional telephone
networks, the advanced networks that comprise the Internet cannot function efficiently and cost-effectively
without management. No network has ever been built without regard to prioritization of traffic, peak loads,
and capacity management.” Id. Wireless network operators, in particular, argue that because their
networks may not have as much bandwidth as other wireline providers, they must be allowed to limit or
block certain content and applications like BitTorrent and to otherwise actively manage the use of their
networks’ resources. Network neutrality opponents state that any unintended consequences produced by
neutrality rules may have particularly acute consequences for such networks. See, e.g., Altschul, Tr. II at
51 (maintaining that applying network neutrality regulations to wireless broadband networks “would have
unique effects and they would be negative effects”).
283
See, e.g., Thorne, Tr. II at 34-39 (discussing the costs of deploying broadband networks). According to
Thorne:
When Verizon puts its fiber down a street, it costs us, in round numbers, $800 per home.
It costs us again, in round numbers, another $840 to connect the home that actually takes
the service. We spend the money to pass the home, but we don’t know whether the
customer is going to buy broadband service at all, or buy it from us.
Id. at 39. See also Schwartz, Tr. I at 255 (“Economically, it doesn’t make sense that the solution is always
to build more. That’s going to involve carrying a lot of excess capacity, which is going to be expensive.”);
T. Randolph Beard et al., Why ADCo? Why Now? An Economic Exploration into the Future of Industry
Structure for the “Last Mile” in Local Telecommunications Markets, 54 FED. COMM. L.J. 421, 430 (2002)
(estimating the cost of fiber-optic wireline deployment in a metropolitan area at approximately $3 million
per mile).
a manner more consistent with the actual priorities of end users.
284
Opponents further
suggest that prioritizing streaming telemedicine video, for example, ahead of e-mail or
network gaming transmissions to reduce latency and jitter would be socially beneficial.
285
Net neutrality opponents thus argue that network operators should be allowed to
prioritize the transmission of certain data or provide quality-of-service assurances for a
fee in the same way that consumers pay for priority mail service. Some observers note
that many other types of paid prioritization arrangements such as first-class airline
seating, congestion pricing for automobile traffic and public transportation, and premium
advertisement placements are commonplace and generally considered to be socially
beneficial.
286
In addition, they dispute the notion that non-prioritized data will be
relegated to an unacceptable, antiquated slow lane. Rather, they argue that non-
prioritized data traffic will continue to receive an acceptable level of basic service, one
that will improve over time along with more general advances in data-transmission
methods.
287
4. New Content and Applications and the Need for Network
“Intelligence”
Network neutrality opponents argue that new types of specialized services and
premium content require sophisticated, “intelligent” data-traffic management at both the
core and edges of the Internet.
288
Principal examples include VoIP, streaming video for
movies and telemedicine, large video download files, interactive network video games,
and customized business applications. In their view, “dumb” networks based on the
original TCP/IP protocol’s first-in-first-out and best-efforts standards are becoming
284
See, e.g., Schwartz, Tr. I at 255-56 (“[I]t makes sense to use the price system as a signal of which things
merit priority.”).
285
See, e.g., McCormick, Tr. I at 244 (“A communication about your health, for example, is clearly more
important than how quickly your kid can download a video featuring the antics of someone’s pet
hamster.”).
286
See, e.g., Sidak, Tr. I at 112 (“Obviously, we observe price discrimination in competitive markets all the
time.”). See also Farrell, Tr. I at 157 (“Price discrimination, as you have probably all heard many
economists say in forums like this, is not necessarily harmful. And that’s correct, given the other
alternatives available.”).
287
See, e.g., J. Gregory Sidak, A Consumer-Welfare Approach to Network Neutrality Regulation of the
Internet, 2 J.
COMPETITION L. & ECON. 349, 355 (2006) (“Rather than being forced down Lessig’s ‘digital
equivalent of a winding dirt road,’ these content providers would be relegated to something more like a
business-class seat on a flight to Paris.”).
288
See, e.g., Verizon Communications Inc., Public Comment 60, at 6-8. Verizon, for example, suggests
that “[n]ew Internet content and applications require innovative new broadband delivery methods” and that
networks need to be able to prioritize data “to manage bandwidth and control traffic on their network – for
example, to offer different levels of service for content and applications providers to reach their
customers.” Id. at 7-8.
increasingly outdated for certain content and applications.
289
Opponents argue that many
of these newer applications are sensitive to different levels of speed, latency, jitter,
symmetry, bursting, and capacity. For example, virtual teleconferencing generally
requires high speed, low latency, and symmetry, while some one-time video downloads
might require only high speed. By contrast, VoIP does not require significant bandwidth,
but is sensitive to latency and jitter. Neutrality critics argue, therefore, that network
intelligence will be increasingly necessary to provide the optimal transmission climate for
each of these new types of content and applications and that both content and applications
providers and other end users should be allowed to purchase services appropriate to their
particular needs.
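The differing profiles described above can be restated in a short, illustrative sketch (the qualitative labels simply paraphrase the examples in the text; no specific thresholds are implied):

    # Qualitative requirement profiles paraphrasing the examples in the text.
    profiles = {
        "virtual teleconferencing": {"bandwidth": "high", "latency": "low", "jitter": "low", "symmetric": True},
        "one-time video download":  {"bandwidth": "high", "latency": "tolerant", "jitter": "tolerant", "symmetric": False},
        "VoIP call":                {"bandwidth": "low",  "latency": "low", "jitter": "low", "symmetric": True},
        "e-mail / static web page": {"bandwidth": "low",  "latency": "tolerant", "jitter": "tolerant", "symmetric": False},
    }

    def benefits_from_priority(profile):
        """In the opponents' framing, traffic intolerant of delay or jitter is the
        traffic that network 'intelligence' is meant to serve."""
        return profile["latency"] == "low" or profile["jitter"] == "low"

    for name, p in profiles.items():
        print(f"{name:26s} benefits from prioritization: {benefits_from_priority(p)}")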
5. Network Innovation and Competition
Network neutrality opponents contend that network operators should be allowed
to innovate freely and differentiate their networks as a form of competition that will lead
to enhanced service offerings for content and applications providers and other end users.
This perspective has been described as an argument in favor of “network diversity.”
290
Thus, opponents believe that network operators should be able to experiment with new
data-transmission methods and a variety of business plans to better serve the evolving
demands of end users. If such experiments turn out to be failures, network operators will
learn from their mistakes and improve their offerings or simply return to the status quo,
consistent with the normal dynamics of the market process.
291
In their view, a ban on
prioritization would effectively restrict new types of competition, hinder innovation,
potentially preclude price reductions for consumers, hamper efficiencies, and lock in one
kind of business model.
292
They warn that in the nascent and evolving market for
broadband services, mandating a single business plan is likely to lead to inefficient and
unintended outcomes.
293
They also assert that allowing content and applications
289
See, e.g., Adam Thierer, Are “Dumb Pipe” Mandates Smart Public Policy? Vertical Integration, Net
Neutrality, and the Network Layers Model, in NET NEUTRALITY OR NET NEUTERING: SHOULD BROADBAND
INTERNET SERVICES BE REGULATED?, supra note 42, at 73. See also Pepper, Tr. I at 81-83.
290
See, e.g., Yoo, supra note 276, at 9 (“In other words, standardization of TCP/IP would have the effect of
narrowing the dimensions of competition, forcing networks to compete solely on the basis of price and
network size.”).
291
See, e.g., Yoo, Tr. II at 220 (“If we have four players and one wants to experiment with a different
architecture, if they are wrong, they will get hammered and they will come back to the fold. If they are
right, it’s precisely the kind of innovation we should tolerate and encourage.”).
292
See, e.g., American Bar Association Section of Antitrust Law, Public Comment 2, at 8 (“Ultimately, we
believe that the competitive process will drive investment and innovation in the Internet. That investment
and innovation will inure to the benefit of all consumers. We do not think that imposing non-
discrimination statutes, regulations or policies will offer any offsetting benefits economically.”).
293
See, e.g., Pepper, Tr. I at 88 (“[One] concern is really whether net neutrality regulation designed to
prevent anti-competitive conduct could limit, or prohibit consumer welfare-enhancing network
functionality and management, as well as discourage innovation. In other words, regulation is not
costless.”).
providers to purchase quality-of-service assurances and prioritization may allow new
content and applications providers to counteract the competitive advantages typically
enjoyed by incumbent providers, such as the ability to pay for large server farms or third-
party data caching services.
294
6. Network Investment and Potential Consumer Benefits
Opponents argue that prohibiting network operators from charging different
prices for prioritized delivery and other types of specialized services and premium
content will make it more difficult to recoup the costs of infrastructure investments and,
thereby, reduce incentives for network investment generally.
295
They argue that both end
users and content and applications providers should be free to select any level of service
provided by network operators under market-negotiated terms.
296
Network neutrality opponents also stress that, although the Internet began as a
research and government communications network, its explosive growth since the mid-
1990s has been fueled mainly by private, risk-bearing investment.
297
They emphasize
that the individual, decentralized networks that make up the Internet mostly are owned
and operated by private companies and, generally speaking, are private property, even
though they may be subject to certain legal requirements like rights of way
permissions.
298
They point out that deploying and upgrading broadband networks can
entail billions of dollars in up-front, sunk costs.
299
Thus, they argue, any regulation that
reduces network operators’ ability to recoup their investments also effectively increases
294
Similarly, some network neutrality opponents argue that efforts by current leading content providers to
codify the status quo under the guise of neutrality rules are really nothing more than a veiled strategy to
commoditize data transmission and, thereby, preserve their own existing competitive advantages against
possible competitive threats based on new data-transmission techniques. See, e.g., Yoo, supra note 276, at
9 (“[T]he commodification of bandwidth would foreclose one avenue for mitigating the advantages enjoyed
by the largest players.”). See also George S. Ford et al., Network Neutrality and Industry Structure 1
(Phoenix Center Policy Paper No. 24, 2006) (“[P]olicymakers should avoid Network Neutrality mandates
that have the intent or effect of ‘commoditizing’ broadband access services since such a policy approach is
likely to deter facilities-based competition, reduce the expansion and deployment of advanced networks,
and increase prices.”).
295
See, e.g., Lenard, Tr. I at 181 (arguing there is a “striking lack of concern about the effect on incentives
to invest and innovate”).
296
See, e.g., Sidak, Tr. I at 107 (“Well, why do you need to have a federal law prohibiting one kind of
transaction, when you’re perfectly happy with the other?”).
297
See, e.g., Waz, Tr. II at 155-61. Waz states that “[a]ll that competitive investment is what makes it
possible for a Google and Yahoo! and eBay and Amazon and others to be here today . . . .” Id. at 158.
298
See, e.g., Bruce Owen & Gregory L. Rosston, Local Broadband Access: Primum Non Nocere or
Primum Processi? A Property Rights Approach, in NET NEUTRALITY OR NET NEUTERING: SHOULD
BROADBAND INTERNET SERVICES BE REGULATED?, supra note 42, at 163.
299
See, e.g., Thorne, Participant Presentation, at 1 (identifying Verizon Communications capital
expenditures of approximately $45 billion during 2004-06).
their risk profile to investors and, accordingly, would prompt capital markets to demand
an adjusted, higher rate of return. They suggest such an increase in the cost of capital, in
turn, would decrease the likelihood that projects underway could be completed on their
planned scale.
300
In addition to reducing incentives for network investment generally, opponents
argue that banning network operators from selling prioritized data delivery services to
content and applications providers will prevent networks from recouping their
investments from a broader base of customers.
301
In particular, they suggest that
networks should be allowed to experiment with a model in which content and
applications providers pay networks for prioritization and other premium services in the
same way that merchants pay for the placement of advertisements in newspapers and
other publications.
302
They suggest that such a business model might reduce prices for
some end users, much as advertising subsidizes the subscription prices of ad-supported
publications, thereby allowing marginal customers to afford broadband service.
303
They
further suggest that such increased end-user penetration would also increase the effective
demand for content and applications, generally, and thereby benefit their providers.
304
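A stylized arithmetic example of the two-sided recovery argument, with every figure invented solely for illustration, is set out below; it shows only the mechanism opponents describe, not a prediction about actual pricing.

    # All figures are hypothetical and chosen only to illustrate the argument.
    monthly_cost_per_subscriber = 40.00     # network cost the operator must recover

    # One-sided model: subscribers bear the entire cost.
    one_sided_price = monthly_cost_per_subscriber

    # Two-sided model: content and applications providers contribute for premium delivery.
    content_side_contribution = 10.00       # per subscriber, paid by the content side
    two_sided_price = monthly_cost_per_subscriber - content_side_contribution

    print(f"subscriber price, one-sided model: ${one_sided_price:.2f}")
    print(f"subscriber price, two-sided model: ${two_sided_price:.2f}")

Whether competition would in fact pass the content-side revenue through to subscribers in this way is part of what the two sides of the debate dispute.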
7. Economies of Scope from Vertical Integration and Bundling
Net neutrality opponents argue that vertical integration by network operators into
content and applications, along with related bundling practices, may produce economies
300
Sidak, supra note 287, at 357. In addition, some commentators characterize neutrality rules as being a
kind of regulatory taking of private property that can no longer be justified under a theory of natural
monopoly or other similar grounds. See, e.g., Thomas W. Hazlett, Neutering the Net, FIN. TIMES, Mar. 20,
2006; Richard A. Epstein, What We Need is Regulatory Bed Rest, FIN. TIMES, Mar. 20, 2006. Both articles
are available at http://www.ft.com/cms/s/392ad708-b837-11da-bfc5-0000779e2340.html.
301
See, e.g., Yoo, Tr. II at 217 (“[W]e need to allow more flexibility on the server side. . . . Part of those
costs should also vary based on who, which servers, which content and applications providers need those
services.”). See also Sidak, supra note 287, at 367-68.
302
See, e.g., Yoo, Tr. II at 217 (“[W]e have learned in fact, these are two-sided markets. Basically,
upgrades to the network have to be paid for either by consumers or by the server content application
side.”). See also Schwartz, Tr. I at 258-59 (“[N]obody knows what the right pricing structure is. I don’t
claim to know it; nobody does. There is no presumption that the right structure is to recover all of the cost
of consumer broadband networks from consumers alone.”). Other examples of two-sided or, more
generally, multi-sided markets include credit cards (involving merchants and cardholders); dating services
(men and women); video game platforms (developers and players); and telephone networks (callers and
receivers). See generally Jean-Charles Rochet & Jean Tirole, Two-Sided Markets: A Progress Report
(Institut d’Économie Industrielle (IDEI), Toulouse, Working Paper No. 275, 2005), available at
http://idei.fr/doc/wp/2005/2sided_markets.pdf.
303
See, e.g., Schwartz, Tr. I at 259 (“What economics predicts–and it’s independent of a monopoly or–it’s
independent of the degree of competition in broadband access–the prediction is if you allow them to charge
content providers, in their own interest they will now reduce prices to consumers, and therefore, encourage
penetration.”).
304
See, e.g., id.; Sidak, supra note 287, at 367-68; Sidak, Tr. I at 114-15.
of scope and price reductions. They point out that many areas of telecommunications are
increasingly converging. For example, both cable and traditional telecommunications
companies increasingly are offering “triple-” and “quadruple-play” bundles of high-speed
data, telephony, television, and wireless services.
305
In addition, they state that the
vertical integration of distribution with other types of media content is already
commonplace because consumers typically do not want distribution alone, but, instead,
want the particular content enabled by that distribution.
306
Some opponents also suggest
that the prospect of additional revenue streams derived from vertical integration and
bundling could promote additional competition in last-mile broadband services and
provide other benefits to end users.
307
8. Insufficient Evidence of Harm to Justify New Regulation
Network neutrality opponents argue that there is insufficient evidence of harm to
justify an entirely new ex ante regime, particularly when, in their view, competition in
broadband services is robust and intensifying due, in large part, to de-regulation. They
state that, apart from the Madison River case, which was quickly resolved by the FCC,
the harms projected by network neutrality proponents are merely hypothetical and,
therefore, do not merit new rules.
308
Also, they note that a number of network operators
have publicly pledged not to block or degrade end users’ use of their services.
309
They
305
See generally Marguerite Reardon, Cable Goes for the Quadruple Play, CNET NEWS.COM, Nov. 7, 2005,
http://news.com.com/2100-1034_3-5933340.html. See also generally Your Television is Ringing,
ECONOMIST, Oct. 14, 2006, at 3 (special survey of telecommunications convergence).
306
See, e.g., Lenard, Tr. I at 177 (“So what may be needed for a successful business model may be a
bundled product offering that is sufficiently attractive to attract enough consumers to become subscribers at
prices that are going to pay off the costs of these very large investments.”). See also Thomas L. Lenard &
David T. Scheffman, Distribution, Vertical Integration and the Net Neutrality Debate, in NET NEUTRALITY
OR NET NEUTERING: SHOULD BROADBAND INTERNET SERVICES BE REGULATED?, supra note 42, at 1, 13.
307
See, e.g., Rosston, Tr. I at 164-65. According to Rosston, “some of these vertical relationships that
people are concerned about that may increase the profits of a new entrant may be the thing that is
necessary, in order to get a new entrant, in order to compete.” Id. See also Thorne, Tr. II at 57-58.
Verizon, for example, suggests that it would be interested in partnering with hospitals to develop
specialized medical applications that could be delivered over its fiber-optic wireline networks to allow the
remote treatment of patients. Id. Likewise, some observers have pointed to Google’s involvement in
advertisement-supported municipal wireless Internet systems as an example of how vertical integration
may enhance last-mile competition and benefit consumers. See, e.g., Sidak, Tr. I at 108-09; Thorne, Tr. II
at 37; Wallsten, Tr. II at 59.
308
See, e.g., Wolf, Tr. II at 143-44 (“[J]ust as a doctor would not prescribe needless medication for a
growing adolescent on the possibility that some day that adolescent might develop a condition, so, too, we
think Federal regulators are prudent to refrain from prescribing conditions that may in fact stifle or injure
needed growth.”). See also Kahn, Tr. I at 185 (“I think the lesson of history is be very, very careful that
you don’t meddle with a process that is clearly characterized by Schumpeterian [dynamic] competition.”).
309
See, e.g., Thorne, Tr. II at 40 (“[Verizon has] made clear [that] when consumers buy Internet access
capacity from us, they should be able to reach any lawful website they want to get to with that capacity, and
we do not and will not block, degrade, or interfere with consumers’ access to any website.”); Net
Neutrality: Hearing Before the S. Comm. on Commerce, Sci., & Transp., 109th Cong. 21 (2006) (statement
of Kyle McSlarrow, President & CEO, National Cable & Telecommunications Association), available at
argue that operators do not have sufficient power over the distribution of content and
applications310 and, in fact, would alienate their end-user customers if they tried to
engage in such practices.311
Furthermore, they question whether it would even be cost-
effective for network operators to search for and block specific kinds of content and
applications in an ever-expanding Internet universe, given that an increasing number of
proxy servers and encryption techniques are available to end users to counter any such
blocking.312 Similarly, some observers suggest that if such practices are detected, end
users can quickly publicize them and thereby “embarrass” the relevant network operator
engaging in such conduct.313
Finally, network neutrality opponents suggest that the existing jurisdiction of the
antitrust agencies and the FCC is sufficient to deal with any prospective problems
resulting from the use of new data-transmission methods.314 Generally, network
neutrality opponents suggest that any such problems should be handled on a case-by-case
basis – not through ex ante legislation or regulation.315
They express concern that any
such regime might be manipulated in order to achieve strategic, anticompetitive outcomes
or be subject to other forms of rent-seeking behavior and unintended consequences.
http://commerce.senate.gov/public/_files/30115.pdf (“NCTA’s members have not, and will not, block the
ability of their high speed Internet service customers to access any lawful content, application, or services
available over the public Internet.”).
310
See, e.g., Thorne, Tr. II at 42 (“Does Verizon have the ability to prevent Google or eBay or these others
from reaching end users, when the most we could do is temporarily shut off a couple percent of the end
users they can see? . . . There is no single broadband provider that has that kind of power.”).
311
Opponents argue that a shift away from the America Online-type walled-garden model has taken place
and predict, therefore, that customers would vigorously protest any attempt to return to it after becoming
accustomed to generally unrestricted Internet access. See, e.g., Pepper, Tr. I at 136-37.
312
See, e.g., Thorne, Tr. II at 43 (“What we are selling is precisely the capacity to reach all lawful content
and applications. Broadband providers are motivated to maximize the content and applications available to
our customers because doing that maximizes the value of our network and the sales we can make.”). See
also generally Cat and Mouse, On the Web, ECONOMIST, Dec. 2, 2006, at 3 (The Economist Technology
Quarterly survey) (discussing the ability of networks to block end users’ access to desired content and
applications and methods that end users may employ to circumvent such practices).
313
See, e.g., Lehr, Tr. I at 44 (“So, if there is a particular behavior that a carrier is doing, some sort of
quality of service differentiation that really has no justification in cost, and looks really high-handed, it’s
very common for this to get, you know, blogged in real time, and for this to embarrass the carrier so that – I
mean, the carriers and the operators – and force them to change their behavior.”). See also Weiser, Tr. II at
92 (making the same point).
314
See, e.g., Muris, Tr. II at 122 (“If problems of the sort imagined by the advocates of regulation emerge,
the appropriate law enforcement authorities have the jurisdiction and expertise necessary to address
them.”).
315
See, e.g., Schwartz, Tr. I at 254 (“[I]f foreclosure does rise to the level of a serious competitive problem,
the right response is to address it at the time, on a case-by-case basis–at least that’s my view.”).
IV. DISCRIMINATION, BLOCKAGE, AND VERTICAL INTEGRATION
As discussed in the preceding Chapter, proponents of network neutrality
regulation have raised a variety of concerns about the effects of vertical integration in
broadband markets, as broadband Internet access providers have begun to offer online
content and applications in addition to their primary access services. In particular,
proponents are concerned that providers may block or discriminate against unaffiliated
content and applications, to the benefit of affiliated offerings. Because such concerns
may stem from diverse vertical arrangements, this Chapter will construe vertical
“integration” broadly to include any arrangement under which a broadband Internet
access provider may claim income generated by content or applications, such as joint
ventures and exclusive dealing arrangements, as well as outright ownership of content or
applications.
This is a particularly complicated issue because vertical integration into content
and applications provision can create both incentives to engage in procompetitive,
socially beneficial behavior and incentives to engage in anticompetitive, socially harmful
behavior. Vertical integration generally need not be anticompetitive or otherwise
pernicious316 and is often driven by efficiency considerations.317
For example, such
integration may facilitate further network or content and applications development, and it
may spur development of network, content, and applications more optimally suited to
each other. Both price and non-price dimensions of broadband Internet service may thus
improve. As a result, the notion that vertical integration tends generally to be
anticompetitive has been widely rejected in antitrust law and economics for several
decades.318
Many net neutrality proponents argue that their concerns about vertical integration
arise only when there is insufficient competition in the underlying Internet access market.
In that case, a vertically integrated last-mile access provider might exercise its market
power to block access to competing content or applications, degrade the transmission of
competing content or applications, or reduce investment in best-efforts Internet access
services in favor of priority services that carry the access provider’s own or affiliated
content or applications. Other proponents, however, have concerns that are independent
of the degree of market power the access provider enjoys in the access market itself.
These include concerns about the so-called terminating access monopoly problem and the
potential “balkanization” of the Internet.
316
See, e.g., Farrell, Tr. I at 154 (concerns about vertical integration in broadband markets are substantial
but contingent, sometimes highly uncertain, and “very hard to observe, and pin down”).
317
See, e.g., Yoo, Tr. II at 213-14 (citing research by FTC Bureau of Economics Director Michael Salinger
regarding efficiencies in vertical integration in the telecommunications industry).
318
See, e.g., Joseph Farrell & Philip Weiser, Modularity, Vertical Integration, and Open Access Policies:
Towards a Convergence of Antitrust and Regulation in the Internet Age, 17 HARV. J.L. & TECH. 85, 87
(2003).
This Chapter of the Report discusses concerns that net neutrality proponents have
raised about vertical integration in broadband Internet services. Section A discusses
problems that are most likely to arise when a provider enjoys substantial market power in
the provision of last-mile Internet access; Section B discusses certain problems that may
arise independent of the degree of market power attributed to an access provider; Section
C discusses various benefits that may be derived from increased vertical integration in
these markets; and Section D provides a brief summary of the competing arguments and
remaining uncertainties.
Because several types of alleged problems with vertical integration are tied in
some way to price or data discrimination, and because both definitions and applications
of “discrimination” have been contentious in the broadband Internet access discussion,319
this Chapter first briefly clarifies that the economic meaning of discrimination is that of
differentiation and is not intended to have any negative connotation.320 Thus, this Report
– in particular, this Chapter and Chapter V – does not assume that price discrimination or
any form of product or service differentiation is necessarily anticompetitive or anti-
consumer.321 Even where demand conditions allow a seller to price above marginal cost,
price discrimination can provide a means of increasing overall consumer welfare by, for
example, providing access to goods or services for some consumers who otherwise would
be priced out of the market.322
319
See, e.g., Ford, Tr. II at 239 (criticizing imprecise usage of terms like “discrimination” in the broadband
policy discussion). Cf. Farrell, Tr. I at 204-05 (noting disagreement in price discrimination terminology
within Workshop, but suggesting semantic dispute is unproductive); Lehr, Tr. I at 37-38 (trying to “move
away from the loaded term” of “discrimination”); William H. Page & John R. Woodbury, Paper Trail:
Working Papers and Recent Scholarship, THE ANTITRUST SOURCE, Apr. 2007, at 6, available at
http://www.abanet.org/antitrust/at-source/07/04/Apr07-PTrail4=27f.pdf (criticizing Workshop participant
Sidak’s discussion of price discrimination and Ramsey pricing).
320
That is, we generally attach no negative connotation to “discrimination.” Plainly, however, as
mentioned above and discussed throughout this Chapter and Chapter V of this Report, concerns have been
raised about particular potential forms of discrimination, such as blocking or degradation of competing
content and applications.
321
Classical price discrimination can, depending on its form, involve a combination of differential pricing
and product differentiation. See generally ARTHUR C. PIGOU, THE ECONOMICS OF WELFARE (Transaction
Publishers 2002) (1920) (articulating, among other things, a general theory of price discrimination). The
idealized model discussed by Pigou involves monopoly pricing; there is no suggestion here that any
particular entities in the broadband Internet access market enjoy monopoly power or its approximation. Cf.
William J. Baumol & Daniel G. Swanson, The New Economy and Ubiquitous Competitive Price
Discrimination: Identifying Defensible Criteria of Market Power, 70 ANTITRUST L.J. 661, 662 (2003) (“[I]t
is competition, rather than its absence, that in many cases serves to impose discriminatory pricing.”);
Alfred E. Kahn, Telecommunications, the Transition from Regulation to Antitrust, 5 J. ON TELECOMM. & HIGH
TECH. L. 159, 177 (2006) (emphasizing “the difference between price discriminations, such as might
be taken to reflect inadequacies of competition, and differentiations on the basis of differences in costs,
such as would unequivocally be reflective of effective competition”).
322
That is, by producing and selling additional units priced between the highest-priced good or service and
the marginal-cost good or service. Hal Varian demonstrated generally that an increase in output is
necessary for profit-maximizing price discrimination to increase welfare. See Hal R. Varian, Price
Discrimination and Social Welfare, 75 AM. ECON. REV. 870, 875 (1985); see also generally JEAN TIROLE,
Product differentiation in its simplest form can be a means of offering different
versions of a good to different consumers, according to their demands. A common
example is airline travel. Although all passengers receive the same basic product
(transport from one airport to another), airlines offer different fares based on different
levels of service during the flight (first class or coach) and flexibility in making
arrangements (leisure travel advance fares or last-minute business fares). By linking
price and product differentiation, a seller may be able to capture profits that would have
been available under unitary pricing and yet serve segments of the market that otherwise
would be excluded.323
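A simple numerical sketch may help make the welfare point concrete; the figures below are purely illustrative and are not drawn from the Workshop record or from any actual broadband offering. Suppose a seller with a constant marginal cost of 10 faces 100 consumers each willing to pay 40 and another 100 consumers each willing to pay only 15. Under a single uniform price, the seller compares

\pi(p = 40) = 100 \times (40 - 10) = 3000
\pi(p = 15) = 200 \times (15 - 10) = 1000

and therefore charges 40, serving only the high-value group: output is 100 units and total surplus is 3000. If the seller can instead charge each group its own price,

\pi_{discrim} = 100 \times (40 - 10) + 100 \times (15 - 10) = 3500

so output doubles to 200, the low-value group is served rather than priced out, and total surplus rises from 3000 to 3500. This is consistent with the output condition noted in note 322, supra: had discrimination merely reallocated the same 100 units at different prices, total welfare could not have increased.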
A. Last-mile Access Concerns Contingent on Market Power
Some net neutrality proponents have argued that vertically integrated broadband
providers possessing market power in the provision of last-mile access could leverage
that power in ways ultimately harmful to consumers. There are two major related
concerns. First, such providers could have incentives to discriminate against competing
content or applications providers.324 Second, such providers could have incentives to
underinvest in the facilities used to provide common, best-efforts Internet access services.
Because techniques such as deep packet inspection can reveal source or content
information, there is some concern that vertically integrated providers with sufficient
incentives to discriminate against competing content could do so.325 Such blocking could
take several forms. A broadband provider with an interest in content or applications
could block competing content or applications outright. Less extreme forms of
discrimination could impose degraded or otherwise inferior transmission on competing
THE THEORY OF INDUSTRIAL ORGANIZATION 137-39 (1988). Several Workshop participants applied this
general point to the broadband competition discussion. See, e.g., Sidak, Tr. I at 114-15. Several others
focused on the particular variant of so-called Ramsey price discrimination, observing, for example, that
Ramsey pricing is “the most efficient way to recover fixed costs.” See Yoo, Tr. II at 217; Lehr, Tr. I at 38.
In a seminal paper based on then-current models of monopolist price discrimination, Frank Ramsey
considered how a proportionate tax system might be structured to raise a given amount of revenue while
imposing a minimum decrease in utility. See F.P. Ramsey, A Contribution to the Theory of Taxation, 37
ECON. J. 47, 47 (1927). The most general answer – that, “the taxes should be such as to diminish in the
same proportion the production of each commodity taxed” – provided a foundation not just for models of
taxation, but for, among others, utility rate structures and constrained price discrimination. See id.
Ramsey’s model mirrors monopolist price discrimination, but does so subject to a profit constraint.
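For reference, the modern statement of this result for pricing is usually given in inverse-elasticity form; the notation below is supplied here for exposition and does not appear in Ramsey’s paper:

\frac{p_i - c_i}{p_i} = \frac{\lambda}{1 + \lambda} \cdot \frac{1}{\varepsilon_i}

where p_i and c_i are the price and marginal cost of good i, \varepsilon_i is the own-price elasticity of demand for good i, and \lambda is the multiplier on the revenue (or profit) constraint. Percentage markups are therefore largest where demand is least elastic, which is the sense in which Workshop participants described Ramsey pricing as an efficient means of recovering fixed costs.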
323
See PIGOU, supra note 321, at 279-80.
324
See, e.g., Farrell, Tr. I at 156.
325
See Michael Geist, ISP Must Come Clean on Traffic Shaping, TORONTO STAR, Apr. 16, 2007, at D5,
available at http://www.thestar.com/sciencetech/article/203408. See also supra Chapter I for a discussion
of deep packet inspection and other traffic-shaping technologies.
content. For example, such content might be denied access to prioritized routing,326
relegated instead to best-efforts or otherwise inferior routing.327
1. Discrimination against Competing Content and Applications
Some net neutrality proponents have argued that, if a broadband provider had a
financial stake in particular content or applications, it could have an incentive to block its
competitors’ content or applications.328 In broad economic terms, one Workshop
participant identified the potential incentives to block competing content or applications
as the incentives to “resist substitutes”329 for complementary goods in which the
integrated entity has a stake.330
The incentive to block competitors could, for example, be to protect the primary
(broadband Internet access) market from future competition, especially from content or
applications providers that might themselves seek a presence in the access market;331 or
the access provider could seek to facilitate price discrimination in the primary market.332
326
In the alternative, the broadband provider could charge a very high price to competing content providers
to access priority routing.
327
See, e.g., CENTER FOR DIGITAL DEMOCRACY, LIFE IN THE SLOW LANE: A GUIDE TO THE UN-NEUTRAL
NET (2006), available at http://www.democraticmedia.org/issues/UNN.html.
328
See, e.g., G. Sohn, Tr. I at 116 (regarding “the possibility” that a provider would “favor certain
applications, content, and services”); cf. Libertelli, Tr. I at 76 (alleging actual applications discrimination or
blocking in wireless broadband 3G markets).
329
Farrell, Tr. I at 156. Farrell points out that if the broadband provider were allowed to charge competing
content providers a price for access equal to profits the broadband provider would lose by customers
buying the competing content instead of his own content, then there would be no incentive to block access.
However, this would lead to a very high price for the content – even monopoly levels. See also Rosston,
Tr. I at 163.
330
Some cable companies providing broadband service are currently integrated into IP telephony (in
addition to cable services, including video on demand). Conversely, some telephone companies providing
broadband service are currently integrated into cable-type video services (in addition to telephone services).
For example, AT&T through its affiliation with Akimbo Systems will branch out into other Internet content
as well. See Laurie Sullivan, AT&T Aims for Internet Television, TECHWEB TECH. NEWS, Apr. 18, 2006,
http://www.techweb.com/wire/networking/185303601. IP telephony faces competition from third-party
providers such as Vonage, while video on demand services are now beginning to see competition from
third-party sources. See, e.g., Saul Hansell, Smaller Video Producers Seek Audiences on Net, N.Y. TIMES,
Oct. 6, 2005, at C1, available at
http://www.nytimes.com/2005/10/06/technology/06video.html?ei=5090&en=042ceaad45ac8536&ex=1286251200
(smaller producers trying to bypass traditional TV networks and sell directly to consumers over
Internet).
331
See Farrell & Weiser, supra note 318, at 109-10.
332
See id. at 107 (“Participating in, or dominating, the applications market can help a platform monopolist
to price discriminate; this objective may make even inefficient vertical leveraging profitable.”).
The assumptions underlying these concerns are controversial. First, to the extent
that such concerns about vertical integration depend on the vertically integrated entity
having significant market power in a relevant broadband Internet access market, there is
considerable disagreement as to whether such market power exists.333 Even if an access
provider has sufficient market power to discriminate against competitors in
complementary content or applications markets, there remains the question of whether it
has sufficient incentive to do so. In an oft-cited article suggesting that there are
legitimate concerns about vertical integration in broadband markets, Farrell and Weiser
(both of whom participated in the Workshop) observed that an access provider,
depending on various contingencies, might or might not have sufficient incentives to
block competition in content or applications markets.334 In that article, Farrell and
Weiser argue that “[p]rice discrimination need not in itself be inefficient or anticonsumer,
but the platform monopolist’s desire to price discriminate can . . . lead it to exclude
efficient competition or price competition in complementary products.”335 They further
argue, however, that “platform monopolists” will balance the fact that the platform
business is more valuable when complements are supplied efficiently against the
possibility that “competition in the complement can sometimes threaten the primary
monopoly.”336
Others argue that countervailing incentives are dominant and that discrimination
problems are merely hypothetical.337 Specifically, they assert that a broadband access
provider’s chief incentive is to maximize the value of its core business – its network – to
present and potential customers.338 Because that value depends centrally on the content
and applications to which the network provides access, several Workshop participants
maintained that providers would not have an adequate incentive “to limit their end users’
experience on the public internet.”339
333
Chapter VI of this Report, infra, discusses more fully the present and (likely) future state of competition
in broadband access markets.
334
See Farrell & Weiser, supra note 318, at 100-01.
335
Id. at 108.
336
Id. at 109.
337
See, e.g., Lenard, Tr. I at 195. See also U.S. INTERNET INDUS. ASS’N, NETWORK NEUTRALITY AND
TIERED BROADBAND (2006), available at http://www.usiia.org/pubs/neutrality.doc.
338
See Lenard & Scheffman, supra note 306, at 18-19 (“[U]nder any market structure, the platform
provider has a strong incentive to maximize the value of the platform to consumers . . . . Broadband
providers benefit from having applications and content markets that maximize value to their customers.
Anything that detracts from user value will also reduce the demand (and hence the price that can be
charged) for the platform.”).
339
Thorne, Tr. II at 42-43; see also Sidak, Tr. I at 104 (“Network operators provide a complementary
service to Internet content. They do not have an interest in reducing the supply of a complement.”).
Thus, the degree to which a last-mile broadband access provider has a sufficient
incentive to discriminate against competing content and applications is an empirical
question. The broadband provider must weigh the potential profit from additional sales of
its own content against the potential losses stemming from the diminution of content or
applications that consumers view as essential complements to
the access service. Certain net neutrality proponents have cited the Madison River matter
as evidence that the incentive to discriminate is, or could be, sufficient to prompt an ISP
to block a rival’s application.
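One way to frame this empirical question is as a stylized comparison; the notation below is introduced purely for illustration and was not part of the Workshop record. A vertically integrated access provider gains from blocking or degrading a rival’s content or application only if

\Delta\pi_{content} > \Delta\pi_{access}

where \Delta\pi_{content} is the additional profit earned on the provider’s own or affiliated content and applications when end users are diverted from the rival, and \Delta\pi_{access} is the access revenue forgone because a network that excludes desired content is worth less to subscribers (through churn, slower subscriber growth, or a reduced willingness to pay for the platform). The arguments summarized above amount to competing claims about which side of this inequality is likely to dominate in actual broadband markets.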