Islands in the Bit-Stream: Charting the NII Interoperability Debate

François Bar

Working Paper 79
July 1995

The Berkeley Roundtable on the International Economy

Copyright 1998, by BRIE

Generous support for production of the BRIE Working Papers Series was provided by the Alfred P. Sloan Foundation.

Table of Contents

1. Interconnection and Interoperability
   Interfaces
   Conflicting economic goals
   Market Structure and Policy Approaches

2. Interfaces and Strategy

3. Interfaces and Policy

   Intellectual Property Law

   1. Copyright
   2. Patent
   3. Other Forms of Intellectual Property Protection

   Antitrust Law

   1. Antitrust Law Generally: A Blunt Instrument
   2. The "Essential Facilities" Doctrine

   Regulatory Oversight

4. Conclusion


Inquiring into interconnection and interoperability is a bit like the search for justice. Everyone avows its virtue, but the agreement in principle conceals a host of divergent conceptions. And the actual practice degenerates into disputes over detail and degree. Similarly, interconnection and interoperability are singled out as essential goals by nearly everyone in the debate, from the Clinton Administration to the G7-Business Roundtable, from the National Academy's technical experts to the Public Interest Coalition's consumer advocates. 1 There agreement ends. Thereafter confusion reigns.

Even generally accepted definitions lack certainty and clear boundaries. In the telecommunications world, interconnection implies physical connectivity between different 'systems' across an 'interface' with enough compatibility to avoid harm on either side of the interface, and to hand off information across it (in the form of sound, data or images). Here, 'systems' include networks (as when AT&T's network interconnects to Pacific Bell's) and user equipment (as when you plug your modem into Pacific Bell's network or your TV into Time-Warner Cable's set-top converter box). An 'interface' is the connection between systems specified in physical (electrical or mechanical) and/or logical (format, syntax or procedure) terms. 2 By contrast, in the computer world, interoperability is the ability of different interconnected 'systems' (the prior definition now expanded to include components, sub-systems, software, databases etc.) to work together in a predictable and coordinated fashion to accomplish a common purpose. 3

But greater precision of definition only uncovers more fundamental disagreements over how much interconnection and interoperability is best, and how to get there. Much of that debate is posed in terms of extreme alternatives: government should, or must not, mandate interconnection; industry can, or will never, achieve interoperability through market processes; critical interfaces must, or cannot, be proprietary and/or open and/or closed; relevant standards will have to be, or will never be, set by policy, by voluntary consensus, or through competition. This paper explores three main issues behind these disagreements.

One fundamental source of friction, as we argue in section 1, is that conceptions of interconnection come from the telecommunications side, where public and consensus processes predominate, while those of interoperability are shaped by the computer industry's experience, where private market competition sets de-facto compatibility standards. As the worlds of telecom and computing collide, reconciliation of these opposing approaches creates inevitable tensions. A second problem, which we explore in section 2, is that the disagreements, framed as issues of technical merit or social optimality, are rather (and not surprisingly) about narrow self-interest. They reflect substantial differences in corporate strategies and market position—differences that are usually reconcilable only when winners impose their preferences on losers. The third issue, as we argue in section 3, is that at least three sets of policies and legal doctrines (intellectual property, antitrust, and regulatory oversight) bear on the issue of interoperability. These three approaches are not always compatible with each other and furthermore, there is considerable uncertainty as to how each will precisely apply to the NII.

If the consequent variations on the interoperability theme are enormous, so too are the implications for the NII. Consider the range: One extreme would realize a highly interconnected and interoperable NII—a network of interconnected networks interoperating via standards achieved either in a consensus-oriented and voluntary standards-setting process dominated by technical expertise, à la the Internet, or in a formal, public process with government imprimatur analogous to that of past broadcast standards (e.g., NTSC color). 4 At the other extreme lies what might be called a Balkanized NII—an NII with islands of networks, each dominated by proprietary, closed or quasi-closed interfaces set as de-facto standards through market competition. Such a balkanized NII would have limited interconnection and interoperability analogous to today's CATV arrangements—that is, essentially separate networks closed at the originating end (with proprietary interfaces permitting only contracted parties to access the network in order to provide content to end-users) and partly closed at the terminating end (open for physical interconnection and basic services, but closed through proprietary control of the set-top box protocols necessary to connect to and unscramble premium and pay-per-view services).

In between the extremes lies the contested terrain where economic opportunity and market competition are more likely to lead: An NII comprising partly self-contained networks interconnected mostly through private arrangements (e.g., interconnection agreements, licensing) and featuring highly asymmetrical patterns of interoperability, high among some populations for some purposes, low among others. Such an outcome is illustrated by the recently released Microsoft Network, which provides a significant level of interoperability for the population of PC users running Windows 95 and servers running Windows NT wherever they might be located. 5 Microsoft will consummate private interconnection agreements with other networks to reach that population. The resulting proprietary internetworking protocols will also permit some degree of interoperability with other populations using different hardware and operating systems, but with significant degradation of functionality and performance. In what follows, we explore why this outcome is likely to emerge as a reconciliation of the distinctive traditions of telecommunications and computing, how it reflects the strategic forces at play, and how it might be shaped by the various relevant policy instruments.

I. Interconnection and Interoperability

The debate about achieving seamless operation of a network of networks uses the two concepts of interconnection and interoperability. While the two are often used interchangeably or together as part of one concept, they in fact convey quite distinct meanings. The latter presumes the former, but not vice-versa: you must be somehow interconnected in order to interoperate, but interconnection does not imply any particular degree of interoperability beyond the minimum necessary to avoid harm and pass information. For example, for years incompatible fax-machines (i.e., fax machines from different manufacturers, each built to slightly different proprietary standards) were all interconnectable to the public switched telephone network, but could not interoperate. Interoperability only came later when firms migrated through market processes toward agreement on voluntary compatibility standards ratified in international standards bodies (CCITT/CCIR). 6 Interconnection is binary—you are either connected or not— but interoperability comes in degrees—you can have a little (message passing) or a lot (cooperative computing). Compared to simple interconnection, interoperability presupposes a higher level of logical compatibility necessary for two systems to work in concert across an interface—and the more complete the compatibility, the greater the interoperability.

The two concepts reflect distinct traditions: interconnection comes from the telecommunications world, interoperability from the computer world. Telecom interconnection has been about attaching "foreign" devices to the dominant network, and attaching telephone networks to one another. Computer interoperability, by contrast, has focused on how distinct machines can run the same software, how different programs can exchange files and work together. As a consequence, each emphasizes a different set of interfaces essential to accomplish its purpose. Each favors a different set of economic impacts and market outcomes. Each is policed by a different characteristic policy framework. This section charts some of the key differences between the two approaches (summarized in table 1), emphasizing the potential for conflict as the different worlds of telecom and computing interact.

Table 1: Telecom Interconnection vs. Computer Interoperability

Interface focus
  Telecom interconnection: appliance to network; network to network
  Computer interoperability: application to application; appliance to application

Economic goal
  Telecom interconnection: promote network effects
  Computer interoperability: reward innovation

Market dynamics and policy
  Telecom interconnection: ex-ante regulation of the monopoly/dominant actor
  Computer interoperability: market competition and strong IP protection, with ex-post control of abuse

A. Interfaces

The telecommunications and computer worlds typically collide over the issue of 'essential' interfaces—that is, which interfaces are essential for achieving interconnection and interoperability. Telecom Interconnection has been primarily concerned with the appliance-to-network and network-to-network interfaces, while computer interoperability issues have arisen primarily with two other interfaces, the appliance-to-application (what software runs on which machine) and application-to-application (how different programs exchange data or work in concert). 7 Table 2 provides some examples of interface standards for these four interfaces in the telecom world and in the computer world.

Telecom interconnection issues arose with the introduction of competition to the Bell System. Debate and policy action focused first on the appliance-to-network interface (from Hush-A-Phone in 1956 to the establishment of standards and certification programs in 1975), and then on the network-to-network interface (Execunet, Computer Inquiries I-III). 8 The main issues were the standardization of these interfaces, the obligation to provide interconnection, and the imposition of reasonable (tariffed) interconnection fees.

By contrast, the computer industry has fought pitched market battles over the two application interfaces. In the old IBM mainframe world, IBM controlled both interfaces, changed their specifications with regularity, and that way kept competitors who sought compatibility continually on the defensive. Rather than government fiat, it was market forces—in particular, the innovations of micro-computing and later, distributed client-server architectures—which eventually undermined the monopolist and created new opportunities for interoperability.

Table 2: Examples of Telecom and Computing Interfaces

Interface: appliance-to-network
  Telecom interconnection: physical: modular phone or CATV plugs; logical: standardized signaling
  Computer interoperability: Ethernet (10BaseT)

Interface: appliance-to-application
  Telecom interconnection: (not a major concern of traditional telecom)
  Computer interoperability: Windows and Intel-based machines v. MacOS and Motorola-based machines

Interface: application-to-application
  Telecom interconnection: interface between voice-mail systems; service-provider-to-carrier interface (ONA's BSAs/BSEs as the network's APIs)
  Computer interoperability: file exchange between applications (different word processors able to exchange documents); message passing among different e-mail services (increasing degrees of compatibility: MIME attachments, etc.); client/server compatibility; APIs: the OS-to-application interface

Interface: network-to-network
  Telecom interconnection: MCI to PacBell, voice call [equal access]
  Computer interoperability: Internet [TCP/IP]

In the past, there was little overlap between the debates about interface standardization in the telecom and computing worlds. As the two industries have collided, however, their issues have grown more similar. Indeed, with increasing competition in telecommunications services, the telecom world has been forced increasingly to deal with issues at the application-to-application interface: applications (or, as they are more often called in telecom, services) provided by different competitors must now be able to pass on messages (e.g., competing e-mail services handing over messages) or work together (e.g., voice-mail services provided by a non-Bell company interfacing with the software embedded in a Bell company's switches). In the telecom realm, for example, the regulation of this interface was one of the main objectives of the Open Network Architecture (ONA) framework proposed by the FCC. 9 Similarly, because computers increasingly need to communicate with each other, the computer industry has had to tackle the network-to-network issues traditionally within telecom's domain.
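
To make the e-mail example concrete, the sketch below shows what "message passing" interoperability at the application-to-application interface looks like in practice: a message wrapped in the standardized MIME format (referenced in Table 2) that any conforming mail service can parse. It is a minimal illustration using Python's standard email library; the addresses and content are hypothetical.

```python
# Minimal sketch: a MIME-formatted message, the standardized "envelope" that
# lets competing e-mail services hand messages to one another.
# Addresses and content are hypothetical, for illustration only.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart()
msg["From"] = "alice@serviceA.example"      # subscriber of one service
msg["To"] = "bob@serviceB.example"          # subscriber of a rival service
msg["Subject"] = "Interoperability via a shared message format"

# The body (and any attachments) travel as MIME parts that any
# standards-conforming mail system can interpret, regardless of vendor.
msg.attach(MIMEText("Standard headers and MIME structure cross "
                    "administrative and technical boundaries intact.", "plain"))

print(msg.as_string())   # what actually crosses the interface between services
```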

As each industry begins to experience problems that have historically confronted the other, issues that were settled in one domain are suddenly reopened in the new context as firms maneuver for market advantage. That is why, despite the relative clarity regarding, for example, essential network interfaces in the telecom world, there is no agreement in the NII debate about which interfaces are essential in the sense that control over them will shape the availability of end-to-end interconnection and interoperability. The oft-cited position of the Computer Systems Policy Project, a trade organization of major computer manufacturers, identifies all four of the interfaces described above as essential. 10 The CSPP position is usually taken as a consensus position, but it is not accepted by several major computer industry players like Microsoft, and not even fully endorsed by all CSPP members. For example, in testimony Sun Microsystems executives have identified at least six critical interfaces that only partially overlap with the CSPP four, including software interfaces between applications and operating systems. 11 Conversely, testimony by Microsoft executives makes clear that software interfaces between operating systems and applications should not be considered essential. 12 And it is hard to imagine that the dominant players in the CATV industry, which would like to retain control over functionality and standards in the set-top converter, would be willing to identify much more than the physical interconnection of customer equipment to the network as essential. 13 The word "essential" itself is ambiguous. When characterizing interfaces as essential, the CSPP implies that it is essential for these interfaces to be open in order for the infrastructure to be interoperable. When Microsoft claims that certain interfaces are not essential, it means that it is essential for Microsoft's strategy that they remain under Microsoft's control.

Lack of agreement on which interfaces are critical is hardly surprising. Maneuvering for market position aside, just describing the new issues raised by the relevant interfaces can be extremely frustrating. One source of difficulty is the use of overly broad, and overly vague, categories. Take as an example the CSPP's category for "applications" interfaces. The application-to-application interface covers anything from the standard used by two rival word-processing programs to exchange a document, to the intricate way in which a word-processing application needs to interact with an operating system such as MS-Windows or the Macintosh OS. But not all software interfaces are equal. We need to distinguish between at least two main classes, Operating Systems and User Applications. Two reasons argue for that distinction.

First, some of the most hotly debated issues in the NII arise precisely at the OS-to-application interface. A large number of them concern Microsoft, given its virtual monopoly of the PC operating system market. For example, should Microsoft be compelled to provide, in Windows 95, access to the networks of other information service providers like AOL or CompuServe that is as easy as the access it provides to its own Microsoft Network? Similarly, in telecom, one key to a successful transition to a more competitive market is to make it possible for outsiders to create services that can take detailed advantage of the features of the "operating system" which runs the phone company's switches. The framework we adopt to discuss interoperability should at least provide a useful way to describe these issues.

Second, an appliance and its operating system tend to be very tightly linked. This is the case of Intel-based PCs and their Microsoft OS, of Silicon Graphics workstations and their OS, but also of Northern Telecom Central Office switches and the software which controls them, or of CATV set-top boxes and their OS. As a result, opening that interface between appliance and OS may prove considerably more problematic than opening the interface between OS and application.
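
A rough programming analogy may help fix the distinction between the two interface classes discussed above. The sketch below (purely illustrative; the file name and data are invented) contrasts an application-to-application exchange through a shared document format with an application's call into whatever services the operating system chooses to expose.

```python
# Illustrative analogy only: two kinds of software interface.
import json
import os

# 1. Application-to-application interface:
#    two independent programs interoperate by agreeing on a document format.
document = {"title": "Draft memo", "body": "Text either word processor can read."}
with open("shared_doc.json", "w") as f:
    json.dump(document, f)           # written by one application...

with open("shared_doc.json") as f:
    received = json.load(f)          # ...read by a rival application

# 2. OS-to-application interface:
#    an application interoperates with the operating system by calling the
#    services the OS exposes (here, the file-system listing call).
entries = os.listdir(".")

print(received["title"], len(entries))
```

In the first case interoperability depends only on a published data format; in the second it depends on the OS vendor documenting, and not unilaterally changing, the calls it exposes, which is precisely where the Windows 95 and switch-software controversies arise.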

B. Conflicting economic goals

The traditional telecom and computing approaches to interfaces deliberately emphasized distinct economic goals. Telecom regulation aimed at capturing the social value of "network effects", while the computer industry's pursuit of unregulated competition sought to encourage innovation and diversity. The two goals have traditionally been viewed as a trade-off: The more regulation imposed integration and compatibility, the more it stifled diversity and innovation; conversely, the more the market promoted diversity and innovation, the greater the risks of incompatibility and dis-integration. 14 So long as telecom and computing were distinct industries, it was possible to have both integration and innovation, the one in telecom and the other in computing. This choice is reflected in the dominant policy instruments used in each case: regulation for telecom, competition and intellectual property protection for computing.

The economic bargain underlying telecommunications policy assumed that telecommunications was a natural monopoly, and that society was therefore better served by regulating that monopoly rather than promoting competition to curtail it. One important reason for the belief, as demonstrated through the early history of the industry when AT&T progressively acquired most of its competitors, was "network effects"—that for both users and providers, the value of belonging to a network increases with the size of that network. 15 As AT&T's network grew bigger than its competitors', AT&T's service became more valuable and attracted more customers, and AT&T would ultimately have become a 'natural' monopoly had regulators not stepped in to make it a 'regulated' monopoly instead. Regulation then pursued the twin goals of containing monopoly power and maximizing network effects, through rate-base regulation and cross-subsidies for universal service to encourage diffusion of network benefits.
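
One common stylization of this network effect (an illustration, not a formula from the paper): if each of $n$ subscribers values being able to reach each of the other $n-1$, the aggregate value of the network grows roughly as

$$ V(n) \;\propto\; n(n-1) \;\approx\; n^{2}, $$

so that doubling the subscriber base roughly quadruples the network's total value, exactly the dynamic that let the largest network pull away from its rivals.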

A similar rationale supported the telecom regulators' approach to interconnection with competing networks: because of network effects, the only possible way for competitors such as MCI or Sprint to be viable was to have comparably efficient interconnection (CEI) 16 to that afforded the dominant network. This required the ex-ante regulatory definition of interconnection standards and rules (tariffs, conditions of access, etc.). The economic benefits of network effects would then outweigh the potential benefits of unrestricted market competition. For example, the ex-ante regulation of interconnection conditions may well have precluded more innovative approaches to network design and interfaces, but this was considered a small price to pay for the guarantee that all parts of the network would be able to communicate.

By contrast, in the computer industry, advocates of broad intellectual property protection primarily intend to encourage (and reward) innovation, rather than either to encourage widespread diffusion or to guarantee interoperability by imposing common interfaces. As we explore further in section 2, the innovators, owners of the intellectual property embodied in the interfaces, may choose to price their products low and share information about their interfaces—or they may not.

Here again, as telecom and computing collide, their respective economic objectives often come into conflict. The issue is how to reconcile the goals of innovation and integration while minimizing the trade-off between them. In areas where there are strong "network effects"—and the NII certainly is such an area—an economic case can be made for weaker protection of innovators. Joseph Farrell has argued that "in industries where standards and network externalities are important, there are good economic arguments for protecting intellectual property somewhat less strongly than is usually desirable." 17 He points out in particular that each user who refrains from using an innovative network product because IP protection has raised its price reduces the value of that product to society, by not making the network larger. This logic extends further: if obstacles prevent other innovators from using features of a network product, there is a similar loss to the existing network, but also a "dynamic" loss of opportunities not explored. Critically, as Farrell points out, because network products benefit from strong first-mover advantages, the initial innovation will automatically enjoy strong protection even if it is not particularly ingenious; simply by virtue of being first, it can preclude the development of much better alternatives (this effect is a direct consequence of the path dependence explored in section 2).
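
Farrell's point can be put in a simple illustrative form (a stylization, not his own formulation): suppose each of $n$ current users derives value $v(n)$ from a network product, with $v$ increasing in $n$. A would-be adopter who is priced out forgoes only her private value, yet society also loses the spillover she would have conferred on existing users:

$$ \underbrace{v(n+1)}_{\text{private gain}} \;+\; \underbrace{n\,\bigl[v(n+1)-v(n)\bigr]}_{\text{gain to existing users}} \;>\; v(n+1). $$

Because the adopter weighs only her private gain, a price that deters her destroys the spillover term as well, which is the core of the case for weaker protection where network effects are strong.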

In the end, such considerations would argue for tilting the policy balance away from increased protection for innovation, towards incentives for greater compatibility and integration in all situations where there are strong network effects. Curiously, the Working Group on Intellectual Property Rights of the Clinton Administration's Information Infrastructure Task Force appears to have reached precisely the opposite conclusion. 18 And in truth, the existing debate gives little guidance on where precisely to draw the line.

C. Market Structure and Policy Approaches

Finally, telecommunications and computing have traditionally taken opposite policy approaches to standardization, reflecting drastically different market structures and dynamics. Telecommunications networks were traditionally controlled by a few players, who also largely controlled the standards-making process. This led to the establishment of standards-setting bodies with a clear mandate, which tended to define standards before new products and services were introduced in the market. Telecom interconnection was therefore largely planned ex-ante, with standards and access conditions agreed upon before new devices or networks were permitted into service. This approach extended beyond the national sphere, to international bodies striving for negotiated interconnection arrangements rather than confrontation in the marketplace. In the telecommunications world, many appear eager to continue with this approach and to introduce competition only after carefully defining interconnection procedures.

The computer industry, by contrast, has always been enormously more competitive. Competing systems fight it out in the marketplace, and interoperability standards are typically the result of ex-post resolution, arrived at either through negotiation among those who survived the market battle or through recognition of a de facto winning standard. Policy oversight is likewise an ex-post process, attempting to mitigate the dominant positions acquired by the winners and providing after-the-fact remedies once abuses have developed.

These two distinct approaches can no longer simply co-exist, in blissful ignorance of each other. As telecom and computing collide and the respective players openly compete against one another, the policy tendency (clearly exhibited in the latest round of legislative proposals) is to go for the lowest common denominator and adopt the least constraining of the two frameworks —i.e. open competition, with few policy safeguards for interoperability. Competition will drive the evolution of the NII, and problems with interoperability will be dealt with if and when they emerge from the market process. The danger, of course, is that by the time such problems emerge, they will be very hard to remedy. The next section suggests why.

II. Interfaces and Strategy

The collision of telecom and computing starkly poses the question of whether interfaces essential for achieving interconnection and interoperability should be open or closed, and whether or not firms should be able to establish proprietary positions in them. The range of possible answers to those questions reflects an equally wide range of corporate priorities, strategies and competitive positions. For reasons elaborated below, there is no obvious right answer that would produce a social optimum. Our approach here, therefore, is to examine the different answers and to assess their underlying rationales and some of their alternative consequences.

Disagreements about the degree to which essential interfaces should be open or closed conceal enormous variations in competitive strategy and market position. 19 Although usually posed in binary opposition as technically inherent characteristics, open vs. closed in fact constitutes a spectrum of possibilities contingent on firm strategies and the extent of available intellectual property protection. A fully open interface is one in which the interface standard—i.e., the technical information necessary to implement the interface—is fully available on a nondiscriminatory and timely basis to anyone, usually through publication of the interface specifications and of any changes in them. For example, in allowing the interconnection of customer premises equipment (CPE) to the public switched telephone network (PSTN), the FCC adopted a fully open interface: The specifications are in the public domain, equally accessible to all, and changes in them can only be made through formal public process. By contrast, a fully closed interface is the mirror opposite, the interface standard is made unavailable to anyone other than through legally permissible reverse engineering. Fully closed in just that fashion was IBM's BIOS (Basic Input/Output System) for the original IBM-PC, the interface specification for communicating between input/output devices and the central processing unit (cpu). 20

In practice, many interface standards, and most microcomputing interfaces, lie between these extremes. They are only partly open or closed—the standard is licensed rather than published, with either the universe of licensees, the degree of documentation of the technical specifications, or the permissible uses restricted in some fashion. Very often, changes can be made unilaterally by the standard holder in ways that affect availability and timing of access to the interface specification. Where firms align along this spectrum is a matter of corporate strategy and market position. Thus, to establish an initial market foothold, SUN published many of the specifications for, and liberally licensed on nondiscriminatory terms, its SPARC RISC architecture, operating system, and UNIX implementation.

By contrast, the principal standards in the PC world, originally more open in order to create the standard, gravitated toward closure as the standard became established and widespread: After a period of liberal licensing of second sources with the 80286, Intel now tightly controls its MPU architecture, going so far in the Pentium (80586) as to restrict access (via mandatory non-disclosure agreements) to certain technical information contained in the so-called Appendix H (which contains the instruction sequence for pipelining, a technique to increase performance in some uses). 21 Similarly, although MS-DOS was liberally licensed, each successive Microsoft operating system release, from Windows 3.1 through NT and Windows 95, has been more closed. The relevant interface specifications have been less fully documented, unilateral changes have disadvantaged licensees, and usage has been restricted (e.g., Microsoft will not permit Windows 95 to be ported to the PowerPC architecture).

As implied above, open vs. closed is closely related to another spectrum of standards possibilities: whether, or to what extent, proprietary intellectual property interests ought to be permitted in them. Simply put, proprietary implies ownership: the standard-holder exercises some property interest in the interface technology, specification or implementation, normally through exercise of intellectual property rights (copyright, patent, trademark). Obviously, closed and quasi-open standards rest on proprietary control—enough ownership to restrict use in the ways described above. But even fully open standards can have proprietary interests in them. For example, the telephone jack that permits interconnection of CPE to the PSTN is patented even though the associated interface specifications are open.

The opposite of proprietary is public, a standard in which no owner can assert a property interest because the standard lies fully in the public domain (as, for example, do TCP/IP and other Internet standards, as well as many broadcast standards like NTSC). 22 Again, there are intermediate possibilities: proprietary standards that become public via sanction in formal standards bodies, as for example Xerox's Ethernet LAN standard, or quasi-proprietary standards where an owner's control is given up to a private membership organization whose member-owners become joint holders of the standard in question. For example, OSF, created in response to AT&T's attempt to standardize its System V version of UNIX, purchased IBM's proprietary UNIX implementation, AIX, and standardized it among OSF members, who retained preferential rights. 23

Table 3 summarizes the discussion above, identifying the alternative types of standards and their principal characteristics and competitive implications, and provides typical examples of each. 24

Table 3: Standards Typology


Open/Published

  Public: equally available to all
    Competitive aspects: non-competitive, or compete on implementation
    Example: TCP/IP (Internet)

  Owned: licensed on reasonable, non-discriminatory and timely terms
    Competitive aspects: owner seeks to maximize installed base; new-entrant strategy; compete on value-added and implementation
    Example: Dolby, Sun NFS, VHS

Restricted

  Public: available, but government can restrict access or usage
    Competitive aspects: industrial policy to favor domestic producers, or other reasons (e.g., export controls)
    Example: PAL, some public-key encryption

  Owned: owner's choice to license; can restrict access, timing, price and/or use, and make unilateral changes
    Competitive aspects: owner seeks to maximize profits, usually on the basis of a locked-in installed base; markets tend toward oligopoly or monopoly
    Example: MS-Windows 95, NT; CATV converters

As we argue below, in the NII interoperability context each major player favors one of three relevant alternatives: consumers, industrial users, and some information service providers tend to favor open/public standards; common carriers and equipment producers who lack significant quasi-monopoly positions tend to favor open/owned standards for software patents, and even some open/public standards on software where there is debate about copyrightability (many ACIS members, telephone companies); and producers with locked-in, quasi-monopoly positions tend to favor restricted/owned standards (e.g., Microsoft, IBM, CATV companies).

However, the differences in position extend beyond these categories to the processes by which standards are set. Broadly speaking, there are three ways to achieve compatibility standards: through government mandate, voluntary consensus, or market competition. In the US, for example, broadcast and interconnection standards have typically been set via FCC process. Industry-led or quasi-public standards bodies like the IEEE, the Internet Society or the CCITT have been responsible for a wide range of communications compatibility standards (e.g., respectively, LAN standards, Internet standards, and fax protocols). In stark contrast, computer industry interoperability standards have been set mostly through market competitions that result in de-facto standards. In practice, these approaches are not mutually exclusive. It is not unusual, for example, for de-facto market standards to be later ratified via consensus or public processes (as happened with the Xerox Ethernet and IBM Token Ring LAN standards). And industry-led, voluntary standards processes very often take on aspects of the market dynamic that characterizes de-facto standards competitions—increasingly complicated rivalries in which increasingly sophisticated game-theoretic calculations are possible and are made by the players: it is well known that standards organizations are heavily influenced by major producers, often captured positively in the sense that they are dominated, or negatively in the sense that they are a response to de-facto dominance (or its threat). 25 Indeed, participation in standards committees is an integral part of product development strategies in information technology industries. 26 Those who master the process can significantly influence the choice of standards and final outcomes. 27

Not surprisingly, the overwhelming weight of industry opinion in the debate on NII interconnection and interoperability favors industry-led voluntary and de-facto market processes for achieving standardization. 28 And here we run into a major-league set of complications, because standards have funny (i.e., non-neoclassical) economic characteristics. 29 Neoclassical economic theory did not formally address contexts in which firms could influence their competitive environment beyond making equilibrating price/quantity decisions. 30 More recent work in a variety of subfields—e.g., the economics of technological change, organizational change, imperfect markets and increasing returns, and new trade theory, among others—has acknowledged that firms can aggressively shape market outcomes in many circumstances. 31 And, of course, observers of actual market competition routinely see that, in practice, firms do try and often succeed in shaping outcomes, especially in technology markets. In such markets, standards are a principal means of influencing the competitive environment.

Like all standards, compatibility standards are carriers of technical information in codified form, around which related industrial and consumption activities can coalesce with heightened predictability and lowered risk: those who produce or use products that implement the standard form a complementary and reinforcing community—that is, a network. The universe of conforming products constitutes the standard network's installed base. In general (and discounting for the moment the increasing costs of coordination often associated with size), the bigger the network, the greater the benefits for users and producers (i.e., network externalities, the "network effects" described in the last section). Thus, for example, Microsoft's new standard operating system, Windows 95, will draw together a variety of producers of complementary products, from PCs and peripherals to applications software and information services, into a network with all of the users whose computers run the operating system. The universe of such machines will be Microsoft's installed base for Windows 95. Because Microsoft's existing DOS/Windows 3.1 installed base is huge, and because migration to the new standard will become increasingly attractive as others jump aboard (the so-called bandwagon effect 32 ), the Windows 95 network can be expected to eventually become quite large.

As the Microsoft example implies, compatibility standards are more than mere information vectors: In facilitating the organization of related industrial activities and in creating opportunities for consumption, standards also shape market structure and the terms of exchange. 33 Standards shape market structure, among other ways, by altering relative costs among producers, inducing demand pattern changes, raising or lowering entry barriers, creating opportunities for economies of scale and scope, facilitating a division of labor, and generating opportunities for network externalities in both production and use. For example, the NTSC broadcast standard for TV had these impacts. Its adoption altered costs among existing players, facilitating market entry by those who had invested in it, making market entry more expensive for those who had invested in other approaches. By focusing demand from the entire market of broadcasters and potential consumers on a single standard, it permitted equipment producers to invest and reach economies of scale in production. It facilitated a division of labor on both sides of the standard—permitting specialization in production and post-production of programming, broadcasting, and the manufacture of a variety of different kinds of equipment. Perhaps most important, it facilitated the emergence of nationwide TV broadcast networks.

While standards thus create the possibility of increasing returns for producers and users (e.g., embodied in scale economies or network externalities), they simultaneously impose costs. In particular, there are costs associated with adoption when alternatives must be scrapped, or when complementary know-how and assets must be developed in order to exploit the standard or to benefit fully from it. Those costs become barriers to switching between standards networks whenever the perceived benefits of staying put outweigh the perceived adoption costs of switching. If switching costs are high, network members can be 'locked in', in the sense that they are not likely to switch except in the highly unusual circumstance that major shifts in price, performance and/or functionality make it even more costly to stay put. The lock-in of IBM's mainframe customers endures to some extent even today, in the face of the quite radical shifts in price and performance that decentralized client-server systems purport to offer.
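
The switching calculus just described can be compressed into one illustrative inequality (a stylization, not a formula from the paper): a member of an installed base switches standards only when

$$ B_{\text{rival}} - B_{\text{incumbent}} \;>\; C_{\text{switch}}, $$

where the $B$ terms are the perceived benefits of each standards network and $C_{\text{switch}}$ captures scrapped equipment, retraining, and rebuilt complementary assets. Lock-in is simply the case where $C_{\text{switch}}$ is so large that plausible differences in benefits never close the gap.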

The coexistence of large potential gains that rise with size, manipulable costs, and influence over market structure and the terms of exchange gives standards battles many of the characteristics of competitions to develop and commercialize new technologies. 34 Because market conditions are anything but the perfectly competitive equilibrium of neoclassical models, standards choices are highly dependent on initial starting points, available resources, market context, and event sequence. Advantages can accrue to early movers (whether innovators or imitators, producers or users), who in turn can influence the choices of later players as market structure shifts. The timing and pattern of developments—choices made by producers and especially by lead-users—can significantly influence the choice of standards in ways that are difficult to reverse (so-called 'path dependence'). 35 Seemingly small choices can have big impacts—just as occurred with Sony's choice to focus the initial recording capability of its Betamax VCR around hour-long TV shows rather than the multiple-hour sporting events that initially drove surging sales of the rival VHS standard—a small choice that proved decisive in negating Sony's early lead in installed base. 36 As installed bases and the size of the associated standards networks grow, players can acquire monopoly-like market power with lock-in. This is true even for products conforming to open standards, wherever producers can differentiate their implementations and maintain the differentiable features—in effect creating market niches over which they can act as quasi-monopolists. Indeed, equipment from almost all producers of open computer platforms that run some version of UNIX will interoperate better within the brand than across brands, even though all brands conform to the common standards.
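
The path dependence invoked here can be illustrated with a toy adoption model in the spirit of the increasing-returns literature cited above (a sketch under strong simplifying assumptions, not a model from the paper): each new adopter weighs an idiosyncratic preference plus a bonus proportional to the current installed base, so early random choices get amplified into durable lock-in. All parameter values are arbitrary.

```python
# Toy path-dependence simulation (illustrative only): two identical standards,
# each new adopter adds a network-effect bonus proportional to the current
# installed base to a random private preference. Early luck tends to snowball.
import random

def simulate(adopters=10_000, network_weight=0.01, seed=None):
    rng = random.Random(seed)
    base = {"A": 0, "B": 0}                      # installed bases
    for _ in range(adopters):
        utility = {
            s: rng.uniform(0, 1) + network_weight * base[s]   # preference + network effect
            for s in base
        }
        winner = max(utility, key=utility.get)
        base[winner] += 1
    return base

# Identical technologies, different random histories -> very different outcomes.
for run in range(5):
    print(simulate(seed=run))
```

Across runs, one standard or the other ends up with nearly the whole market, even though neither is intrinsically better, which is the sense in which small early events can decide the outcome.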

In these kinds of circumstances, multiple competitive equilibria are possible: 37 Markets (and industry-led standards processes) can clear to a single standard (or a closely related set) with multiple implementations, as happened with the fax machine or the VCR, or with only one dominant implementation, as may be happening with Microsoft PC client and server operating systems today. Multiple but reasonably stable competing standards networks are also possible, as is the case today in the workstation market, where SUN, HP, Silicon Graphics and IBM vie for dominance with incompatible RISC architectures and operating systems.

In general, the economic characteristics outlined above hold for any kind of standard, whether public or owned, open or restricted, and no matter what the process by which the standard is achieved. But they hold especially well for owned standards, and especially for owned-restricted de-facto standards set through market competition. Such standards permit the standard-owner to take advantage of the peculiar economic characteristics of standards as a self-conscious strategy, to manipulate its competitive environment in unprecedented ways. The aim is to establish a quasi-monopoly position and to maintain high and rising barriers to entry and, with them, high and rising switching costs for one's locked-in customers—in essence, to reap a large and growing share of the available benefits. 38 By favoring one set of producers or users at the expense of another—which can only be done when evolution of the standard and access to it are controlled through ownership—standard-holders can directly influence the allocation of available benefits.

The actions of one standard-owner then directly influence the returns to a rival. 39 Market competition consists of strategic thrusts over pricing, licensing and other assets that anticipate and forestall rival moves while attempting to structure the market to the standard-owner's advantage. As market power begins to accrue with installed base, the possibilities for manipulation grow commensurately. In that sense, de facto standards competitions are market processes in which the players vie for the available consumer and producer surpluses stemming from the achievement of standardization. The winners—like Microsoft and Intel in PCs—establish de-facto standards monopolies and become wildly profitable as more and more of the available surplus accrues to them through consumer lock-in and the exit of competitors. And those circumstances can tolerate a high degree of user dissatisfaction, as essentially all users of Microsoft operating systems are by now aware: barely adequate price-performance-functionality is in most cases more than sufficient to continue the locked-in relationship, especially where a large investment in complementary products (e.g., applications programs) and associated learning has occurred. In those circumstances, upgrades and follow-ons need not be better than, or even as good as, rivals' products; they need only be adequate enough to deter a switch.

It is this possibility of influencing the allocation of long-term returns that makes de-facto standards competitions such fertile ground for corporate strategy and that lies so obviously behind the self-interested variations on the interoperability theme that plague the current debate. For given the economic stakes, there is little doubt that where proprietary and open standards coexist and de-facto market processes dominate, as in the market-driven NII context where telecommunications and computer industry approaches come together and the winning outcome is very uncertain, commitments in principle to broad interoperability via open standards are likely to be sacrificed on the altar of corporate advantage. Major producers will influence demand and market acceptance of open compatibility standards in ways that favor their own proprietary alternatives.

The classic case in point is the effort by European computer manufacturers to break IBM's mainframe dominance through consensus-based Open Systems Interconnection (OSI) standards. 40 Major computer manufacturers with significant installed bases 'supported' OSI by incorporating OSI standards into their products, but implemented the standards to significantly degrade performance (e.g., of database queries) relative to their own proprietary approaches (e.g., IBM's SNA). The market impact was to shift user preferences to proprietary solutions and to forestall customers from shifting to more open systems. It is not at all difficult to imagine similar things occurring in the NII interoperability context. As the next section suggests, policy is unlikely to influence this situation for the better.

III. Interfaces and Policy

Prospects for interconnection and interoperability on the NII will be shaped by policy, both at the initiative of government and through firm strategies. Three sets of policies (and associated legal doctrines) are directly relevant: intellectual property law, antitrust law, and regulatory oversight. These instruments have long played roles in shaping the telecom and computer industries. As suggested earlier, ex ante regulation has been particularly important in setting the terms of telecom interconnection, while intellectual property doctrine has helped to mold the terms of computer interoperability. Antitrust has played a role in both worlds. Again, however, as the two worlds come together, once-settled issues are being reopened.

For purposes of assessing how policy in the changed context might shape interconnection and interoperability on the NII, this section analyzes the current status of the relevant doctrines and the parameters of the debate over them. We look first at intellectual property, then at antitrust, and finally at regulatory use of compulsory licensing, compulsory interconnection, and rate-setting. There are two caveats. First, in many cases, the status of current legal doctrine is in flux and subject to intense debate among powerful interests. Second, no one can predict how courts or Congress might apply doctrines developed in one context to the new context of the NII. In short, there will almost certainly be changes from the status quo described below.

A. Intellectual Property Law

Intellectual property law has been a primary battleground for disagreements over the extent to which software and hardware interoperability may be assured. Most legal disputes over software interoperability are played out in copyright doctrine, which is aimed at protecting creative expression like that embodied in books and films, while most legal disputes over hardware interoperability are played out in patent doctrine, which is aimed at protecting more purely functional subject matter like that embodied in traditional industrial machinery. However, software and telecommunications technologies are challenging this distinction between patent and copyright: these new technologies offer a blend of functionality and creative expression. Moreover, the categories are being pushed as the distinction between software and hardware blurs. In the face of ensuing uncertainty over the doctrines, in response to the development of doctrines that some believe do not offer adequate protection, and in light of short product cycles in high technology industries, some producers may try to turn to other forms of intellectual property protection such as trade secret and licensing law.

1. Copyright

Copyright law treats software as a "literary work," offering strong legal protection to appropriate subject matter. If protection is afforded, it is relatively broad, granting a term of the life of the author plus fifty years during which the copyright holder has the exclusive, transferable right to authorize, inter alia: (1) reproduction of the program 41 (except for purposes of "fair use" 42 ); (2) public display 43 or performance 44 of the program; (3) distribution or rental of the program; 45 and (4) adaptations or alterations of the program. 46

For purposes of the interoperability debate, the crucial copyright question has been: What is protectible subject matter in the software context? Literal copying of an entire computer program is clear copyright infringement, but what if only a small part of the program is copied (e.g., the interface specifications or protocols)? What if part of the program is not literally copied, but is imitated or emulated? A central copyright principle holds that an idea is not protectible, but the expression of the idea is; and purely functional subject matter is not protectible under copyright, but creative subject matter is. These distinctions beg the question: What software elements are 'expressive' or 'creative' (as opposed to 'ideas' or 'functional')? Are interfaces expressive? Which interfaces? The U.S. Supreme Court has never attempted to answer the question, but appellate courts in most circuits have, and so have most of America's larger computer and software producers.

Some companies— mostly larger, more established U.S. software producers which tend to pursue closed/proprietary standards strategies, such as IBM, Apple, and Intel— support relatively broad copyright protection for software, emphasizing the creativity it embodies and the desirability of protecting software so as to encourage continuous investment in software innovation. These companies, many of which are members of the Alliance to Protect Software Innovation (APSI), have often supported the legal standard set forth in the Whelan case 47 and its progeny, which would offer protection to and consider as "creative" the unique "structure, sequence, and organization" of software code that comprises a computer program. This standard suggests inherently creative and original expression in any particular string of code; thus, the rule could be used to treat virtually any sequence of zeros and ones as "unique" expression, suggesting that most, if not all, interfaces would be protectible. For example, under this rule, a software "key" consisting of eight bits that unlocks a video-game console to permit video-games to be played on it could be found copyrightable: despite the key's apparently functional character, the Whelan rule could be used to conclude that the "unique" "sequence" of the eight bits confers creativity and so copyrightability on the key. These companies also often oppose a broad "fair use" right to copy or otherwise use software for purposes of decompilation, 48 which is one means of discovering the specifications of an interface.

Other companies— mostly producers that are challenging larger, more established firms for market share, and many of which are pursuing strategies based on relatively "open" systems, such as AMD, Sun, and 3Com— support relatively narrow copyright protection for software, emphasizing its functional character and the desirability of ensuring that alternative producers may develop and sell products that are interoperable with other software. These companies, many of which are members of the American Committee for Interoperable Systems (ACIS), often champion the legal standard set forth in the Computer Associates case, 49 which rejected the Whelan standard and called for a three-part "abstraction-filtration-comparison" test (described in detail below) to determine which elements of software may be protected by copyright. This three-part test generally has been used to deny copyright protection to software interfaces. These companies also often champion the Sega case, 50 which held that the "fair use" doctrine permits persons who are neither copyright holders nor licensees to disassemble a copyrighted program insofar as it is necessary for understanding the unprotected elements of the program, including unprotected interfaces. These companies argue that decompilation is often undertaken for the legitimate purpose of developing interoperable software and that it is necessary to understand unprotected and unpublished elements of code.

The clear trend in U.S. courts has been to reject the Whelan "structure, sequence, organization" standard and embrace the Computer Associates approach. Since 1987, courts in at least seven circuits 51 have implicitly or expressly rejected the Whelan test. And since 1991, courts in at least seven circuits 52 have embraced the Computer Associates test. The Computer Associates approach rejects central premises in Whelan that a computer program embodies essentially one idea and that "the purpose or function of a utilitarian work would be the work's idea, and everything that is not necessary to that purpose or function would be part of the expression." 53 The Computer Associates test instead relies heavily on its understanding that a computer program is made up of multiple programs and sub-programs (including interfaces), each of which is associated with at least one idea and each of which must be examined for potential exclusion from copyright protection. The "abstraction-filtration-comparison" test is intended to ascertain whether an allegedly copied program is "substantially similar" to the copyrighted program:

In ascertaining substantial similarity under this approach, a court would first break down the allegedly infringed program into its constituent structural parts. [Abstraction.] Then, by examining each of these parts for such things as incorporated ideas, expression that is necessarily incidental to those ideas, and elements that are taken from the public domain, the court would then be able to sift out all non-protectible material. [Filtration] Left with a kernel, or perhaps kernels, of creative expression after following this process of elimination, the court's last step would be to compare this material with the structure of an allegedly infringing program. [Comparison.] 54

In its discussion of the second or "filtration" step, the Court listed the types of elements that would be unprotectable under copyright law, including elements dictated by external factors, such as the mechanical specifications of the computer on which a particular program is intended to run, compatibility requirements of other programs with which a computer is designed to operate, a computer manufacturer's design standards, demands of the industry being serviced, and widely accepted programming practices within the industry.

Hence, under this approach, Sega's software key, which unlocks its video-game console for use with video-game cartridges possessing the key, was held unprotectable. In video-game cartridge software, the key constitutes an appropriate level of "abstraction" for examination. And the key was "filtered" out of the set of code that was then considered in the third step of the infringement test ("comparison"): the key was viewed as an essentially functional compatibility requirement of any program with which the video-game console is designed to operate. 55
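
For readers who think procedurally, the three-step test can be caricatured in the following sketch (purely illustrative: it mirrors the structure of the doctrine as quoted above, not the law itself, and the hard legal judgments live entirely inside the hypothetical helper predicates).

```python
# Caricature of the abstraction-filtration-comparison test (illustrative only).
# The predicates is_unprotectable() and substantially_similar() stand in for
# the contested legal judgments; they are hypothetical placeholders.

def abstraction(program):
    """Step 1: break the allegedly infringed program into constituent structural parts."""
    return program["parts"]          # e.g., modules, routines, interfaces

def filtration(parts, is_unprotectable):
    """Step 2: sift out ideas, necessary incidents of those ideas, public-domain
    material, and elements dictated by external factors (compatibility, etc.)."""
    return [p for p in parts if not is_unprotectable(p)]

def comparison(protected_kernels, accused_program, substantially_similar):
    """Step 3: compare the remaining kernel(s) of creative expression
    with the allegedly infringing program."""
    return any(substantially_similar(k, accused_program) for k in protected_kernels)

# Hypothetical usage:
# infringes = comparison(
#     filtration(abstraction(copyrighted_program), is_unprotectable),
#     accused_program,
#     substantially_similar,
# )
```

On this reading, the Sega key never survives step 2, so it never reaches the comparison at all.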

The Computer Associates approach embodies a clear bias against a broad scope of software copyright protection and has set up a procedure for analyzing programs that supports that bias. "Abstraction" and broad "filtration" rules applied prior to "comparison" will often leave interface elements unprotected and will thereby permit competition among products that attach to a given interface. Moreover, broad "filtration" rules applied prior to "comparison" will often excise elements that would otherwise make programs appear similar, reducing the likelihood that allegations of non-literal infringement will succeed, thereby favoring competition by products that compete with the product that defines the interface. And the tendency to use broad "filtration" rules increases the likelihood that the "fair use" doctrine will permit decompilation for purposes of discovering interface specifications. 56

Nonetheless, the trend towards the Computer Associates approach still leaves many outstanding issues relating to interoperability that will permit courts to offer copyright protection to interfaces. For example, the doctrine does not dictate objectively how to select an appropriate level of "abstraction" or the program elements to which the "filtration" process must be applied; selection of the level will influence the outcome of the analysis. Similarly, despite guidance in the Computer Associates opinion, the question of whether particular elements are "functional" (and so unprotected) or "expressive" (and so protected) is not always clear: thus, despite having embraced Computer Associates, the Ninth Circuit concluded that the key to Sega's "lock-out" system was not copyrightable because it contained just a few lines of functional code, 57 while the Federal Circuit concluded that Atari's lock-out key was expressive because the code was capable of being expressed in a variety of ways. 58 Hence, despite the bias against broad protection of interface specifications that is implicit in the Computer Associates decision, courts will likely continue to hold that many interface implementations are protected by copyright. Courts using the Computer Associates test are likely to make copyright decisions about interfaces that affect interoperability on a case-by-case basis. Even under the Computer Associates standard, there is uncertainty as to whether software copyright protection is narrow enough to guarantee high levels of interoperability throughout the information infrastructure.

2. Patent

The main legal requirements for receiving patent protection are that the invention in question be novel, useful, and non-obvious. Thus, in contrast to copyright, patent law is intended to cover functional subject matter. Obtaining patent protection is much more expensive than obtaining a copyright, and the process takes much longer. If a patent is granted, the patent-holder is entitled to protection for 20 years from the date of the patent application (considerably shorter than the copyright term).

Because patents cover functional subject matter, patent law has figured most prominently among NII components at the physical layer of the telecommunications infrastructure, where functional elements like switches and terminals are found. Insofar as interfacing with or replicating patented physical assets is necessary in order to use or compete in the telecommunications infrastructure, patent protection could in theory be used to constrain interoperability and put the patent-holder in a monopolistic position. In the earliest years of the Bell Company, patents provided just such a monopoly.

In the last fifteen years, however, patent law has been used with increasing frequency to afford protection to software associated with the information infrastructure. This trend makes some sense, since copyright protection is denied to software that is purely functional, and since switches embedded in hardware may be functionally equivalent to magnetic signals embedded in software media. Since the Diehr case 59 in 1981, it has become increasingly clear that many computer programs, if properly presented, are patentable subject matter: while a mere mathematical algorithm is not patentable, 60 an algorithm that is claimed as a key element of a useful invention may be patentable. Lower courts have since interpreted the Diehr decision as affirming that an algorithm may be patentable if it is applied to physical elements or process steps of an invention— if it defines specific structural or physical relationships between elements of an invention, or it refines or defines an inventive process step. 61 Thus, software patent law could become a significant threat to interoperability: while the doctrine allows patents in only limited circumstances, some interface specifications and processes could be protected, to the extent they can be linked to physical processes or devices.
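
As a purely hypothetical illustration of software that might clear this hurdle (in the spirit of the signal-processing algorithm cited in note 61, though not the invention actually claimed in any case), consider an algorithm that transforms one sampled electronic signal into another:

    #include <stddef.h>

    /* Simple moving-average filter: transforms one sampled electronic signal
     * (for example, a digitized waveform from a physical sensor) into another,
     * smoothed signal. Tying the algorithm to a physical signal or device is
     * what brings it within patentable subject matter under the Diehr line of
     * cases; the bare mathematical formula, by itself, would not be patentable. */
    void smooth_signal(const double *in, double *out, size_t n, size_t window)
    {
        for (size_t i = 0; i < n; i++) {
            double sum = 0.0;
            size_t count = 0;
            size_t start = (i >= window) ? i - window : 0;
            for (size_t j = start; j <= i; j++) {
                sum += in[j];
                count++;
            }
            out[i] = sum / count;
        }
    }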

Taken together, these doctrines suggest that patent protection affecting interoperability is most likely at the physical layer of the telecommunications infrastructure, since physical devices and processes are frequently patented, but such protection is also possible at other, more software-intensive layers: for example, it is not difficult to imagine software patents granted at the applications layer in association with such devices as joysticks and disk drives. The potential power of such patents was illustrated in 1994, when Compton's attempted to enforce its broad patent on accessing multimedia data through multiple entry paths, a patent that has since been invalidated (a decision now on appeal to the Board of Patent Appeals).

3. Other Forms of Intellectual Property Protection

Frustrated by the trend narrowing the scope of software copyright protection, and constrained by the narrow limits of software patent doctrine, some computer and telecommunications lawyers are exploring the potential for using other forms of intellectual property law to protect interfaces. For example, many companies routinely use software licenses (including shrink-wrap licenses) that purport to prohibit decompilation and otherwise protect interfaces, attempting to achieve by contract or trade secret protection what has not been achieved through copyright or patent law. At this time, while such approaches are used often, many observers are skeptical about their enforceability. 62

B. Antitrust Law

What intellectual property law offers in the way of protection for interfaces, antitrust law could take away. As section 2 suggested, a crucial NII interface, software program, or piece of hardware that enjoys intellectual property protection may help put the company that controls it in a dominant market position. While the monopoly conferred by intellectual property protection is often exempt from antitrust law (especially in the case of patents), it is possible that antitrust law could intervene to increase competition. IBM, AT&T, and Microsoft know this well.

1. Antitrust Law Generally: A Blunt Instrument

General antitrust law is a relatively blunt instrument for attacking companies that have gained a monopoly position. Under Section 1 of the Sherman Act, vertical or horizontal agreements that restrain trade may be subject to attack. Under Section 2 of the Sherman Act, action may be taken against a company that illegally obtains a monopoly or attempts to monopolize; moreover, Section 2 may also be used against a monopolist that acts to extend or entrench its monopoly position, even if that position was legally obtained. Section 3 of the Clayton Act may be used to attack tying and exclusive-dealing contracts; Section 7 is used to block mergers that might tend to lead to monopolization or substantially lessen competition; and Section 8 is aimed at interlocking directorates. The Robinson-Patman Act deals with price discrimination. The antitrust actions that broke up AT&T and that resulted in a consent decree with IBM were based largely on the Sherman Act, while the recent court action blocking Microsoft's purchase of Intuit was based on Section 7 of the Clayton Act.

While such action— breaking up a monopoly, blocking a merger, etc.— may increase competition among companies involved in the NII, or prevent an NII-related market from becoming less competitive, it is not a surgical means of ensuring interoperability on the NII.

2. The "Essential Facilities" Doctrine 63

The "scalpel" by which antitrust law might ensure interoperability on the NII is the "essential facilities" or "bottleneck monopoly" doctrine. Under this doctrine, a monopolist's refusal to provide a competitor with access to an "essential facility" necessary for effective competition may violate antitrust law. The doctrine was established by the U.S. Supreme Court in a 1912 case, U.S. v. Terminal Railway Association of St. Louis. 64 A group of railroad companies owned two bridges and a ferry boat that could transfer railroad cars across the Mississippi; the group denied access to these facilities to other railroads, thus foreclosing their competition in the St. Louis market. The Supreme Court held that the denial of access to the "essential facilities" violated Sherman Act Section 1 (agreements that restrain trade) and ordered the group to provide access at reasonable cost.

Subsequent decisions extended the doctrine to apply to action by a single monopolist under Sherman Act Section 2. 65 In the telecommunications sector, the most well-known single monopolist "essential facilities" case is MCI Communications Corp. v. American Telephone and Telegraph Co. 66 AT&T blocked MCI's interconnection with a local telephone network, which the court found to be denial of access to an essential facility and so a violation of Section 2 of the Sherman Act. At the same time, the court set forth a four-part test for invoking the doctrine: (1) control of the essential facility by a monopolist; (2) a competitor's inability practically or reasonably to duplicate the essential facility; (3) the denial of the use of the facility to a competitor; and (4) the feasibility of providing the facility. 67

While the "essential facilities" doctrine could serve as a guarantor of interoperability on the NII, serious legal hurdles would have to be overcome. First, interoperability bottlenecks on the NII are more likely to be controlled by one company than a group of companies and courts will likely be far more wary of using the doctrine in Sherman Act Section 2 cases (a single monopolist) than in Section 1 cases (an agreement among two or more parties to restrain trade). 68 For example, one court has held that "a facility that is controlled by a single firm will be considered 'essential' only if control of the facility carries with it the potential to eliminate competition in the downstream market" 69 and economic analysis in the case suggests that such control must be expected to be relatively permanent. 70 That case suggests that merely impeding competition in a related market is not sufficient for invoking the essential facilities doctrine in the single monopolist context: control of the facility must carry with it the power to eliminate at least some competitors in a downstream market. In the NII context, these distinctions become important: while network effects may confer relatively permanent "lock-in" of a standard and control over a particular market, proprietary control over interfaces with that standard may be used to extend control to upstream and related markets (i.e., not only downstream markets) and may be exercised so as to severely impede competitors (but not eliminate them).

Second, the hurdle is likely to be higher still in cases where the "essential facility" is an interface specification, or other information necessary to achieve interoperability, that is protected by intellectual property law. Patent laws appear to create a limited exception to antitrust laws, 71 but some courts have held that a monopolist's refusal to license a copyright could violate antitrust law's rule of reason. 72 Moreover, the Supreme Court has held that Associated Press violated antitrust law by entering into an agreement denying sale of its copyrighted news stories to prospective member newspapers in favor of existing members. 73 Thus, the courts could support compulsory licensing of copyrighted information in an "essential facilities" case, but some cases suggest a presumption in favor of the copyright-holder's desire to exclude others from using the protected subject matter that can be overcome only by showing an anti-competitive purpose and effect. 74

Third, some courts have been reluctant to use the Sherman Act to break up natural monopolies, particularly where the lower costs of scale economies associated with the natural monopoly are being passed on to consumers, arguing that in such cases government regulation, as opposed to harsh civil and criminal sanctions under the Sherman Act, is more appropriate. 75 Therefore, courts may be reluctant to use the Sherman Act against NII standards-holders that obtain market power through network effects and associated economies of scale; logically, that reluctance should diminish with evidence that consumers are facing higher prices or lower quality than could be expected if there were more competition associated with interoperability rights or licenses.

Taken together, these cases suggest that in the NII interoperability context, the "essential facilities" doctrine is most likely to work in cases where two or more companies together control an interface or equipment that has become a permanent de facto standard which must be used to enter a particular market. The case becomes more difficult if the interface is controlled by only one player, and more difficult still if the resulting bottleneck is not expected to be permanent, which may sometimes be the case in rapidly changing high technology sectors. The case becomes most difficult where, in addition to those hurdles, the interface is protected by patent or copyright and the firm controlling that interface is providing it to consumers and those who produce interoperable software at a relatively low cost. Hence, in its current form, antitrust doctrine will be difficult to use, available in relatively limited circumstances, expensive, and time-consuming. However, as NII industries evolve and standards get locked in, and some firms thereby obtain control over crucial NII interfaces and components, and some of those firms use that control to eliminate competition, the courts will be increasingly enticed to act under the essential facilities doctrine— even if that means modifying the doctrine.

C. Regulatory Oversight

A third policy instrument will obviously affect interoperability of the NII: regulatory oversight, such as that undertaken by the FCC. While there are dozens of actions affecting NII interoperability that a regulatory oversight agency might take, one set of actions in particular creates a clear precedent for compulsory licensing at a reasonable fee: the FCC's compulsory interconnection rules, which began in the late 1950s with the "Hush-a-Phone" decision, were carried forward in its decisions on "Resale and Shared Use" (1976-81), and were finally extended through the Computer Inquiries (1971 and 1980). Orders compelling interconnection were complemented by the requirement that interconnection be offered at a "reasonable cost" and by an elaborate rate-setting scheme. Compulsory interconnection decisions increased telecommunications competition that would otherwise have been choked off by the infrastructure bottlenecks that characterized AT&T's monopoly position.

The interconnection of foreign (i.e., non-AT&T) devices to AT&T's telephone network originated with the Hush-a-Phone and Carterfone decisions. Prior to these decisions, interconnection of any kind was prohibited. The AT&T tariff simply stated that "No equipment, apparatus, circuit or device not furnished by the telephone company shall be attached to or connected with the facilities furnished by the telephone company, whether physically, by induction, or otherwise". 76 The Carterfone decision ruled that the tariff was illegal and ordered the phone company to allow interconnection of devices that did not cause any harm to the network. In AT&T's view, most potential harm to the network would come from interference with signaling (the network's control layer). To retain control of signaling, yet make it possible for customers to connect their equipment to its network, AT&T initially offered tariffed Protective Connecting Arrangements (PCAs), which were to provide all network signaling. 77 PCAs explicitly dealt with the "appliance-to-network" interface, but left all control in the hands of AT&T. However, arguing that PCAs unnecessarily restricted customers' rights, the FCC eventually opted for standardization of the device-to-network interface. The "direct connection program" (implemented in 1977, after AT&T's appeal was rejected) covered both physical standardization (standard plugs) and logical standardization (requiring interconnected equipment to meet specific technical criteria so as to avoid causing harm to the network).

The series of decisions that gave rise to network-based competition (most notably "Above 890" and "Execunet") extended that approach to the interconnection of emerging networks, specifically MCI's, to AT&T's established network. Because AT&T's facilities were essential to the provision of network services, the regulators ordered the dominant network, AT&T, to provide equal access to these facilities, with tariffed equal access charges. As greater competition was introduced within the telecommunications sector, regulation progressively extended that approach deeper into the network's management and control layer. The approach embodied in the Third Computer Inquiry's ONA (Open Network Architecture) can be viewed as an effort to identify the critical software interfaces embedded in the phone company's switches, to establish standards for these interfaces, and to mandate that they be made available to all for a reasonable fee —the attempt to define standardized, tariffed BSEs (Basic Service Elements) and BSAs (Basic Service Arrangements). In terms of what has happened in the computer industry, this is akin to forcing publication and mandatory licensing of the APIs of the telecom network's operating system.
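
To make the API analogy concrete, the sketch below imagines what a published, tariffed interface to the network's Basic Service Elements might look like to an enhanced-service provider. It is purely hypothetical: the type, function names, and signatures are invented for illustration and do not correspond to actual ONA definitions.

    #include <stddef.h>

    /* Hypothetical published interface to tariffed Basic Service Elements. */
    typedef int bse_handle_t;

    /* Request a Basic Service Element (e.g., call forwarding or calling-number
     * delivery) on a subscriber line, at the tariffed rate. */
    bse_handle_t bse_open(const char *service_name, const char *subscriber_line);

    /* Invoke the element with service-specific parameters. */
    int bse_invoke(bse_handle_t handle, const void *params, size_t params_len);

    /* Release the element when the enhanced-service provider is finished. */
    int bse_close(bse_handle_t handle);

The point of an ONA-style mandate is not any particular set of calls but the obligation itself: the network operator must publish such an interface and make it available to any provider at a tariffed, reasonable fee.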

Similar actions could be taken in the future by a regulatory oversight agency, to mitigate a player's control over a critical interface or computer program on the NII. In the NII interoperability context, this could take the form of compulsory licensing of software or elements of software for a "reasonable fee." Currently, in the United States, compulsory licensing of software (or interface specifications or protocols) is not mandated by any public regulatory oversight agency. Moreover, the clear trend in the United States is toward further deregulation.

4. Conclusion

As the worlds of telecommunications and computing collide, the clear trend is for direct regulation to withdraw in favor of the market. This means an almost inevitable move away from the traditional telecom approach to interconnection, and toward the competitive mechanisms characteristic of the computer industry— checked by the possibility of court intervention. Market competition can be an effective regulator in many domains, but the peculiar economic character of interface standards raises real questions about whether market mechanisms will be sufficient to create a highly interoperable NII. The current legal framework offers unpredictable answers in any given case.

Current legal doctrine appears to favor interoperability for some specific elements of the NII, but also suggests great uncertainty about the extent to which any particular interface, software implementation, or device may receive legal protection. To be sure, the trend in copyright law has been against automatically protecting technical interfaces, often permitting decompilation to discover interface specifications and protocols, and against a broad scope of what constitutes non-literal infringement (i.e., against a broad scope of protection that would work against alternative implementations). Nonetheless, the doctrine is flexible enough to permit a broad scope of copyright protection in any given software case— and several cases have protected the interface and denied the attempt to interoperate.

Patent law may provide a monopoly on some types of interfaces, particularly those that are physical devices, and some algorithms, particularly those that are tied to a device; but compared to copyrights, patents are relatively expensive, narrowly drawn, and difficult to obtain— all of which suggest that they may be something of a "wild card" in the NII interoperability equation.

If copyright or patent law were to protect a crucial interface and the right-holder exercised exclusivity, thereby creating a competition bottleneck, then an antitrust attack based on the "essential facilities doctrine" could well be launched. But such an attack would have great difficulty succeeding under current doctrine. In other words, antitrust doctrine offers the basis for a challenge to bottleneck monopolies on the NII, but leaves great uncertainty as to whether such bottleneck monopolies would be held legal.

Regulatory processes are notorious for being affected by the politics of the process, for the inability of government to judge price/performance (especially where it is not the principal consumer), and for the narrow time frames relative to technical advance in which government may act. 78 Current regulatory oversight rules and institutions are neither prepared nor empowered to establish a system of compulsory licenses to increase competition and enhance the prospects for interoperability.

The economic characteristics of network standardization processes—replete with "network effects" and "path dependence"—and the uncertain legal doctrines that partially regulate them, suggest strongly that in the absence of regulation or judicial action we will have an NII with de facto owned and restricted standards at important interfaces and private interconnection arrangements to surmount them. The networks that constitute the NII will offer different levels of interoperability to different customers, for different uses, at different levels of price, performance, quality, security, reliability and privacy. Within the NII bit-stream, islands of high interoperability will coexist alongside their opposite. This is admittedly a far cry from the image of universal service that animated the regulated vision of the first national information infrastructure, the Bell System.

Our analysis of NII network effects and associated economies of scale suggests that without judicial or regulatory intervention, eventually, a dominant firm will likely establish control over a crucial NII interface. And that firm will have obvious incentives to assert proprietary control over interoperable software or hardware. An island of low interoperability will likely threaten to dam the bit stream. At that point, the courts will be presented with opportunities to intervene through intellectual property law (as they have in the software industry) and antitrust law (as they did in both the telecom and computer industries), and both Congress and regulatory agencies will face pressure to intervene through regulatory action (as they have done in the telecom sector).

If market processes are unlikely to result in socially optimal levels of interconnection and interoperability, what about preemptive intervention? Many believe that we are left with a choice between necessary evils: either imperfect policy or an imperfect market. 79 But serious consideration of the problem goes beyond mere normative beliefs about government and the market: it would be very difficult to recommend a preemptive policy to assure NII interoperability without further understanding of some central questions. For example, it may be premature to identify crucial NII interfaces, and it is likely too early to tell which firms will control them. It is unclear how much intellectual property protection is necessary for investment and innovation in components of the NII. It is unclear how much investment in the NII is being driven by a race for first-mover advantages and associated rents. It is hard to tell whether open or closed strategies will be pursued by firms that establish crucial interfaces— especially since those firms will have to choose a strategy in the face of uncertainty as to whether the courts will protect any given interface. Ultimately, it is uncertainty about answers to questions like these that makes it difficult to recommend intellectual property, antitrust, or regulatory policies that will both assure NII interoperability and sustain continued high rates of investment in the NII.

Footnotes

Note 1: See, Information Infrastructure Task Force, The National Information Infrastructure: Agenda for Action, (Washington D.C. IITF, September 15, 1993); G7-Business Roundtable, "Building A Global Information Society, A call for Government Action," unpublished draft, May 1995 at p.12, 24-25. The latter consists of over 30 major telecommunications, computer, publishing and entertainment multinationals from North America, Europe and Japan, all of whom have a significant economic stake in emerging information infrastructures; National Research Council, Computer Science and Telecommunications Board, Realizing the Information Future: The Internet and Beyond, (Washington D.C.: National Academy Press, 1994), at chapter. Back.

Note 2: The Computer Systems Policy Project, Perspectives on the National Information Infrastructure: Ensuring Interoperability, February, 1994, p.6. Back.

Note 3: Here we follow, but simplify, the definition advanced by the Computer Systems Policy Project, Ibid., at p.5. Back.

Note 4: Of course, even within a highly interoperable NII, there will be significant variations of degree, including proprietary enclaves that boast of greater interoperability. Today, for example, the Netscape client interacting across the Internet with Netscape servers provides higher levels of performance and increased functionality (e.g. page layout, security functions) than when interacting with other brand servers -- even though all brands of servers implement the same protocols that permit interoperability (e.g., tcp-ip, http, ftp, etc.). Back.

Note 5: On the potential for this vision to come to pass, given Microsoft's monopoly position and the economics of 'increasing returns' industries, see Gary L. Reback, et.al., "Memorandum of Amici Curiae in Opposition to Proposed Final Judgment," USA v. Microsoft Corporation, (DC District, Civil Action #94-1564), 1995. Back.

Note 6: Testimony of Kodak's Dr. Robert Sanderson on behalf of CBEMA before the US Senate, Subcommittee on Antitrust, Monopolies and Business Rights, September 20, 1994. Back.

Note 7: We follow here the CSPP definitions. CSPP, Perspectives on the National Information Infrastructure: Ensuring Interoperability, op. cit., p. 5. Back.

Note 8: United States v. Western Electric Company, Inc., 1956 Trade Cases (CCH), 68246 (D.N.J. 1956) ("Hush-A-Phone"); Interstate Foreign Message Toll Tel. Serv. (Registration Program), 56 F.C.C.2d 593 (1975); 58 F.C.C.2d 736 (1976); MCI v. FCC (Execunet I), 561 F.2d 365 (D.D.C. 1977), cert. denied, 434 U.S. 1041 (1978); MCI v. FCC (Execunet II), 580 F.2d 590 (D.D.C.), cert. denied 439 U.S. 980 (1978); Computer I, 28 F.C.C.2d 267 (1971); Computer II, 77 F.C.C.2d 384 (1980); Computer III Notice of Proposed Rulemaking, F.C.C. 85-397 (Aug. 16, 1985) Back.

Note 9: Brock, Gerald, Telecommunication Policy for the Information Age: From Monopoly to Competition, Harvard University Press, Cambridge, 1994, pp. 223-227. Back.

Note 10: The Computer Systems Policy Project, Perspectives on the National Information Infrastructure: Ensuring Interoperability, op cit. Back.

Note 11: See, e.g., testimony of Dr. Eric Schmidt, Chief Technical Officer, Sun Microsystems, Inc., before the US Senate, Judiciary Committee, Antitrust, Monopolies and Business Rights Subcommittee and Technology and Law Subcommittee, September 20, 1994 (hereafter: Joint Hearings), "Interoperability and the National Information Infrastructure" Back.

Note 12: See, e.g., testimony of Nathan P. Myhrvold, Senior Vice President for Advanced Technology, Microsoft, before the US Congress, House, Telecommunications and Finance Subcommittee, February 1, 1994. Back.

Note 13: However, some players in the CATV industry are preparing for the possibility that the set-top box interface may be forced to be more open in a world of digital TV and interactive entertainment services -- see, e.g. next generation converter box operating systems like Microware OS9 and the David OS. Back.

Note 14: Bar, François, "Network Flexibility: A Challenge for Telecom Policy", in Communications and Strategies, IDATE, Vol. 1, No. 2, 1991. Back.

Note 15: More precisely, we are talking about the effects of network externalities: in general, network externalities describe the situation in which the value of the network and its use increases with the number of users. Thus, in a communications network, the larger the number of users, the more valuable the network becomes both for the service provider, which faces increasing demand for services, and for customers, who benefit from being able to reach a wider population. On AT&T-led consolidation in the industry's early history, see for example Gerald Brock, The Telecommunications Industry: The Dynamics of Market Structure, Harvard University Press, Cambridge, 1981, pp. 151-158. Back.

Note 16: As defined by the FCC at 104 F.C.C. 2d 958 parag. 154-166. Back.

Note 17: Joseph Farrell, "Arguments for Weaker Intellectual Property Protection in Network Industries," mimeo, 1995. Back.

Note 18: See Information Infrastructure Task Force, Working Group on Intellectual Property Rights, "Intellectual Property and the National Information Infrastructure," (Washington DC: IITF, 1994) --the so-called Lehmann Report. Back.

Note 19: A useful summary of contending positions on 'openness' among different computer industry players is contained in Jonathan Band, "Competing Definitions of 'Openness' on the NII," (SF: Morrison and Foerster, 1994), draft Back.

Note 20: Ferguson, Charles & Charles Morris, Computer Wars, Random House, New York, 1993, p. 53. Back.

Note 21: This and the Microsoft information based on conversations with industry sources. Back.

Note 22: Note that public domain standards need not be fully open. They can be restricted by governments who might, for example, preferentially favor domestic producers under broadcast or other scarce spectrum standards, as was done with European TV standards like PAL and SECAM to give a market advantage to selected European TV manufacturers. Back.

Note 23: See the account in H. Landis Gabel, Competitive Strategies for Product Standards - The Strategic Use of Compatibility Standards for Competitive Advantage, (London: McGraw Hill, 1991), p. 137-142. Back.

Note 24: These issues can be treated in a variety of ways. Compare, for example, the table in Gabel, Competitive Strategies, at p. 13. Back.

Note 25: See, for example, the account of how aggressive industrial sponsorship shapes 'voluntary' broadcast standards setting in Stanley M. Besen and Leland L. Johnson, Compatibility Standards, Competition and Innovation in the Broadcasting Industry, (Santa Monica: Rand, November, 1986). Back.

Note 26: Martin B.H. Weiss and Marvin Sirbu, "Technological Choice in Voluntary Standards Committees: An Empirical Analysis," Economics of Innovation and New Technology, V.1(1), 1990. Back.

Note 27: See the discussion in Paul A. David and Shane Greenstein, "The Economics of Compatibility Standards: An Introduction to Recent Research," CEPR Technical Paper, #207, Center For Economic Policy Research, Stanford University, June 1990, at p.39ff, especially citing unpublished work by Sirbu and Hughes. Back.

Note 28: It is important to note that compatibility through interface standards is not the only way that interconnection/interoperability could be achieved. Converters and gateway services are alternatives. Each comes, however, with a significant likelihood of degraded performance or functionality relative to the use of compatibility standards. In the NII context, gateway services are likely to arise wherever it is technically and economically feasible to interconnect networks running on different proprietary protocols. The degree and quality of interoperability such services will be capable of providing is greatly speculative at the moment. For that reason we presume that interface standardization will be the preferred alternative. Back.

Note 29: For a brief but dense and suggestive economic analysis of standards, see "Localized technological change and the evolution of standards as economic institutions," Christiano Antonelli, Information Economics and Policy, V.6, #3-4, December 1994, p.195-216. That issue of Information Economics and Policy, edited by Antonelli, was devoted to the economics of standards and contains many fine contributions. See, also, David and Greenstein, "Compatibility Standards", supra; and two contributions, one by Michael Katz and Carl Shapiro, "Systems Competition and Network Effects," and the other by Stanley Besen and Joseph Farrell, "Choosing How to Compete: Strategies and Tactics in Standardization," in the Journal of Economic Perspectives, v.8#2, (1994), p.93-131. Back.

Note 30: The complexities of such contexts are well analyzed by Douglas C. North, Institutions, institutional change and economic performance, (Cambridge: Cambridge University Press, 1991). Back.

Note 31: On technological change see, e.g., Dosi, Nelson/Winter; on organizational change see Galbraith; on imperfect markets and increasing returns see Arthur, David; on new trade theory see Krugman. Back.

Note 32: See, e.g., Joseph Farrell and Garth Saloner, "Installed Base and Compatibility: Innovation, Product Preannouncements and Predation," American Economic Review, v.76, 1986, p.940-955; Michael L. Katz and Carl Shapiro, "Technology Adoption in the Presence of Network Externalities," Journal of Political Economy, V.94(4), August, 1986, p.822-841. Back.

Note 33: See the discussion by Antonelli, "Localized technological change," supra, at p.201-202. Back.

Note 34: Antonelli remarks that in modeling the emergence of de facto standards, the economics of technical change and of standards are so intertwined that they really cannot be separated. Ibid., at p.205. Back.

Note 35: On the concept of path-dependence, see W. Brian Arthur, "Competing Technologies and Lock-in by Historical Events: The Dynamics of Allocation Under Increasing Returns," CEPR Publication #43, (Stanford: Center for Economic Policy Research, September 1985) Back.

Note 36: See the account in Gabel, Competitive Strategies, at p.67-70. Back.

Note 37: See, in particular, Paul A. David, "Path-dependence and predictability in dynamic systems with local network externalities: A paradigm for Historical Events," in Dominique Foray and Christopher Freeman, eds., Technology and the Wealth of Nations, (NY and London: Pinter, 1993). Back.

Note 38: A range of the manipulative possibilities are explored for compatibility standards by David and Steinmuller, "Economics of compatibility," supra, at p. 221-223. Back.

Note 39: For more detail on the economics of this point, see David and Greenstein, "Compatibility Standards", supra, at pages 10-23 and the sources cited there. Back.

Note 40: Hugh Collins, "Conflict and Cooperation in the Establishment of Telecommunications and Data Communications Standards in Europe," in H. Landis Gabel, Ed., Product Standardization and Competitive Strategy, (Amsterdam: North Holland, 1987), p.125ff. Back.

Note 41: 17 U.S.C.A. section 106 (1). Back.

Note 42: 17 U.S.C.A. section 107. Back.

Note 43: 17 U.S.C.A. section 106(5). Back.

Note 44: 17 U.S.C.A. section 106(4). Back.

Note 45: 17 U.S.C.A. section 106(3). Back.

Note 46: 17 U.S.C.A. section 106(2). Back.

Note 47: Whelan Assocs., Inc. v. Jaslow Dental Lab, Inc., 797 F.2d 1222 (3d Cir. 1986). Back.

Note 48: "Decompilation" is a method of reverse engineering software involving the translation of machine-readable object code into a higher level, human-readable form. Some argue that decompilation enables programmers to understand a program so as to make interoperable products; others argue that it enables them to copy the essentially creative structure of a particular piece of software and effectively engage in software piracy. Back.

Note 49: Computer Assocs. International v. Altai, Inc., 23 U.S.P.Q.2d 1241 (2d Cir. 1992). Back.

Note 50: Sega Enterprises, Ltd. v. Accolade, Inc., 24 U.S.P.Q.2d 1561 (9th Cir. 1992)(allowing Accolade to decompile Sega's game cartridges to discover interfaces that would enable Accolade to make games that could work on the Sega platform and adjudging those interfaces to be unprotected "functional" subject matter). Back.

Note 51: See Lotus Development Corp. v. Paperback Software Int'l., 740 F. Supp. 37 (D. Mass. 1990)(in the 1st Cir.); Computer Assocs. (2d Cir.); Plains Cotton Coop v. Goodpasture Computer Serv., 807 F.2d 1222 (5th Cir. 1987); Engineering Dynamics Inc. v. Structural Software Inc., et al., No. 89-1655 (E.D. La. 1991)(in the 5th Cir.); Sega (9th Cir.); Autoskills Inc. v. National Education Support System, No. 91-960-M and 91-740-M (D. N.M. 1992)(in the 10th Cir.); CMAX/Cleveland, Inc. d/b/a ComputerMax v. UCR, Inc., No. 91-75-AGH (M.D. Ga., September 25, 1992) (in the 11th Cir.); Atari Games Corp. v. Nintendo of America, Inc., 975 F.2d 832 (Fed. Cir. 1992). Back.

Note 52: Lotus Development Corp. v. Borland Int'l, Inc., No. 93-2214 (1st Cir. March 9, 1995); Computer Assocs. (2d Cir); Sega (9th Cir.); Gates Rubber Co. v. Bando Chem. Indus., Ltd., 9 F.3d 823 (10th Cir. 1993); CMAX/Cleveland (in the 11th Cir.); Atari Games (Fed. Cir.) Back.

Note 53: Whelan at 1236. Back.

Note 54: Computer Assocs. at 1252-53. Back.

Note 55: See Sega at 1557-59. Back.

Note 56: As suggested above, while the common law is evolving in the specified direction, some powerful interests would like the law changed, by legislative means if necessary. For example, draft recommendations by the Clinton Administration's Working Group on Intellectual Property Rights could be seen as narrowing the scope of "fair use." Thus, there is no certainty that the trend will continue along its current trajectory. Back.

Note 57: See Sega at 1569, 1572-74. Back.

Note 58: Atari Games at 840 and 844-45. Back.

Note 59: Diamond v. Diehr, 450 U.S. 175, 101 S.Ct. 1495, 67 L.Ed. 2d 311 (1981) (a mathematical algorithm used to calculate the cure rate for rubber held patentable). Back.

Note 60: Gottschalk v. Benson, 409 U.S. 63, 93 S.Ct. 253, 34 L.Ed.2d 273 (1972) (like phenomena of nature, mental processes, and abstract intellectual concepts, a mere mathematical algorithm is not patentable). Back.

Note 61: See, e.g., Arrhythmia Research Technology, Inc. v. Corazonix Corp., 958 F.2d 1053 (Fed. Cir. 1992) (algorithm that analyzed human heart electrocardiographic signals held patentable because its process steps "transformed one physical, electronic signal into another" and the output was not a mere number but a readable pattern). Back.

Note 62: However, a revision of UCC Article 2 is underway and in that context a new Article 2B is being considered which would address the issue of software licensing. Early drafts would validate shrinkwrap licenses in most circumstances-- including shrinkwrap licenses that prohibit reverse engineering. See generally, Mark Lemley, "Intellectual Property and Shrinkwrap Licenses," 68 So. Cal. L. Rev. 1239 (July 1995). Back.

Note 63: For a similar view of the "essential facilities" doctrine, see Penelope A. Preovolos, "Litigation in the Interface: Connecting to 'Essential Facilities,'" Intellectual Property (San Francisco: The Recorder Publishing Co., March 1995). Back.

Note 64: 224 U.S. 383 (1912). Back.

Note 65: Otter Tail Power Co. v. United States, 410 U.S. 366, 93 S.Ct. 1022, 35 L.Ed.2d 359 (1973) (Court held an electric utility company's refusal to sell power to municipalities that had chosen to own their own retail distribution systems a violation of Sherman Act Section 2, since the utility was using its market power in one market (transmission) to further a monopoly in another market (retail distribution)). Back.

Note 66: 708 F.2d 1081 (7th Cir. 1983). Back.

Note 67: Ibid. at 1132-33. Back.

Note 68: See generally, Areeda, "Essential Facilities: An Epithet in Need of Limiting Principles," 58 Antitrust L. J. 841, 844-45 (1990). Back.

Note 69: Alaska Airlines Inc. v. United Airlines Inc., 948 F.2d 536 at 544 (9th Cir. 1991). Back.

Note 70: Ibid. at 546-49. Back.

Note 71: See Data General Corp. v. Grumman Systems Support Corp., 36 F.3d 1147 (1st Cir. 1994); and 35 U.S.C.A. section 271(d) (1988). See also, Simpson v. Union Oil Co., 377 U.S. 13, 24, 84 S.Ct. 1051, 1058, 12 L.Ed.2d 98 (1964). Back.

Note 72: Rural Tel. Serv. Co. v. Feist Publications, Inc., 957 F.2d 765, 767-69 (10th Cir. 1992). Back.

Note 73: Associated Press et al v. United States, 326 U.S. 1, 65 S.Ct. 1416, 89 L.Ed. 2013 (1945). Back.

Note 74: See Data General at 1187-88. Back.

Note 75: Alaska Airlines at 547-48. See also, Berkey Photo Inc. v. Eastman Kodak Co., 603 F.2d 263 at 294 (2d Cir. 1979). Back.

Note 76: 13 F.C.C.2d 420 at 437 (1968), cited in Brock, Telecommunication Policy for the Information Age: From Monopoly to Competition (Cambridge: Harvard University Press, 1994), p. 83. Back.

Note 77: Brock at p. 86. Back.

Note 78: Paul David, "Narrow Windows, Blind Giants and Angry Orphans: The Dynamics of Systems Rivalries and Dilemmas of Technology Policy", CEPR Paper No. 10, Stanford University, March 1986. Back.

Note 79: This argument is made, for example, by Sanford V. Berg, "Public Policy and Corporate Strategies in the AM Stereo Market," in Gabel, Product Standardization and Competitive Strategy. Back.

 
