
Network Neutrality: A Primer Courtesy of USIIA

Net Neutrality

Network neutrality is the concept that everyone should have equal access to the World Wide Web. Amazon should not be able to pay to have its Web site load faster than a mom-and-pop e-commerce site, for example. After Comcast was accused of blocking peer-to-peer file-sharing traffic, the FCC decided to craft rules that would ban ISPs from discriminating based on content. It was acceptable to slow down an entire network during peak times, for example, but not to block a particular application or protocol, such as BitTorrent. The rules approved by the FCC in December 2010 give the commission the authority to step into disputes about how ISPs manage their networks, or to initiate its own investigations if it believes ISPs are violating its rules.

FCC Report and Order on Net Neutrality: http://transition.fcc.gov/Daily_Releases/Daily_Business/2010/db1223/FCC-10-201A1.pdf

Thanks to David McClure and USIIA for use of the following materials:

Introduction

In 1967, military planners in the Pentagon realized that their communications networks had a fatal flaw – they were based on the telephone system.  For efficiency, the telephone system operated through a series of centralized switching facilities.  An enemy that knocked out those centers could cripple military operations at a critical moment in battle.  The planners huddled with academics to seek a solution to this weakness in America’s defense system.

Thus was born the Internet.

Researchers eventually developed a network that could serve as backup communication for the military.  This network would carry data; it would be capable of surviving multiple strikes from an enemy and continuing to operate; and it would be, by design, absolutely unreliable.

By 1983, the major building blocks of this Internet were in place, in the form we use today.  Chief among them is TCP/IP, the combination of the Internet Protocol (IP) and the Transmission Control Protocol (TCP).  Because IP was designed for survivability rather than reliability, unreliable delivery is one of its fundamental principles.  It is the software on the sending and receiving ends that must be prepared to recognize data loss and retransmit data as often as necessary to achieve its ultimate delivery.

This means that an IP packet can be discarded at any time – no guarantee is made that any particular packet will be delivered.  In fact, should the buffer of any node on the Internet fill up, that node simply drops whatever packets it cannot hold.  Those packets are lost forever, and the Internet itself is not concerned with that.  TCP was designed to layer reliability on top of IP: it detects lost packets and retransmits them until delivery succeeds, and it hands data to the receiving application in order.
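To make that division of labor concrete, here is a minimal sketch, in Python, of the kind of stop-and-wait retransmission loop that sending software must implement on top of an unreliable datagram service such as UDP.  The timeout, retry count, and acknowledgment format are illustrative assumptions, not part of any standard; real TCP implementations are far more sophisticated.

    import socket

    def send_reliably(data: bytes, dest: tuple, retries: int = 5) -> bool:
        # UDP, like raw IP, makes no delivery guarantee; reliability
        # must be built by the endpoints, as the text describes.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(1.0)  # illustrative: wait 1 second for an ACK
        try:
            for _ in range(retries):
                sock.sendto(data, dest)  # (re)transmit the packet
                try:
                    ack, _ = sock.recvfrom(16)
                    if ack == b"ACK":
                        return True  # receiver confirmed delivery
                except socket.timeout:
                    continue  # packet or ACK was lost; send again
            return False  # the network never delivered; give up
        finally:
            sock.close()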

Nonetheless, the Internet as we have always known it was deliberately created to be unreliable.  There is no guarantee that any given packet will arrive at all, and if it does arrive, no guarantee of how quickly.

For simple email and web browsing, this causes only minor problems.  For advanced applications such as Voice over IP and video streaming, it is simply unacceptable.

A Non-existent Problem

The concept of “network neutrality” was created by Internet companies (the High Tech Broadband Coalition) and endorsed by the major network operators as a means to assure non-discrimination against particular sites on the Internet and the devices used to access it.  The concept was also endorsed by then-FCC Chairman Michael Powell in his “Four Freedoms” speech, and in 2005 it was adopted by the FCC as a policy statement.

In the past few months, executives from some network operating companies have expressed concern about an issue that was not included in the doctrines of network neutrality – how to guarantee the reliability of advanced Internet services over a network that was designed to be unreliable.  One approach mentioned was to give higher priority to packets for these advanced services, and to have the companies that benefit from this priority pay more for the additional reliability.

The response from the content companies, self-appointed consumer advocates and the media was instantaneous.  Campaigns were launched, Congress was lobbied, media wailing predicted the “end of the Internet as we know it,” and the telephone companies that operate networks were blamed for everything from gross profiteering to sabotage of the Internet.  Pundits whined that we must maintain the Internet just as it was in 1983 in order to maintain consumer rights and “fairness.”

All of which missed the point that the Internet is not and will never again be the way it used to be.  We have simply outgrown the old, unsophisticated, unreliable Internet and instead ushered in a wealth of new products and services over broadband networks that demand reliability.

In today’s Internet, there are few Internet Service Providers who are not also network operators, so creating artificial distinctions between the two makes little sense.  In the broadband environment of today and tomorrow there is no battle for the last mile – there is only a battle to deliver the best services with the highest reliability.  The Internet of 40 years ago doesn’t meet the demands of 21st-century consumers, and Congress cannot build a coherent broadband policy for America on technologies that are already obsolete.

The Real Problem

To meet the needs of consumers in the 21st century, we have created an Internet that far surpasses the vision of the military and academic planners of four decades ago.  They did not envision voice communication over the Internet.  They did not envision a broadband network over which consumers would demand reliable and uninterrupted services.  And they did not envision predatory Internet applications designed to seize and use every bit of available bandwidth.

Some peer-to-peer networks do exactly that.  They expand to take whatever bandwidth the network has in order to allow faster uploads and downloads.  Add more capacity, and these applications simply adjust to take more.  They create “super nodes” that have already brought down corporate networks and Internet service providers.  They have no respect for network neutrality, consumer rights or other applications.  Voice calls become unreliable.  Video streaming becomes unreliable.  And network operators are striving to use new measures to keep pace.
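By way of illustration only – the text does not name any specific countermeasure – one common network-management technique is token-bucket rate limiting, which caps how fast any single flow can draw down shared capacity.  The sketch below is in Python; the rate and burst figures are assumptions, not values from any real operator’s network.

    import time

    class TokenBucket:
        """Illustrative token-bucket shaper: tokens accrue at a fixed
        rate up to a burst ceiling, and each packet must spend tokens
        equal to its size before it is forwarded."""

        def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
            self.rate = rate_bytes_per_sec
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes: int) -> bool:
            now = time.monotonic()
            # Refill for the elapsed time, capped at the burst size.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True   # within the cap: forward the packet
            return False      # over the cap: queue or drop the packet

    # Example: cap one flow at 1 MB/s with a 64 KB burst allowance.
    bucket = TokenBucket(rate_bytes_per_sec=1_000_000, burst_bytes=64_000)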

This is a problem that will not be solved easily.  Nor will it get easier as more computer viruses and video-sharing services come on stream, further choking other advanced applications.  There is at present no good answer to this dilemma, but one thing is sure – legislating some form of “neutrality” means that we will also legislate against technical solutions to predatory applications, permanently harming the Internet, its applications, and its consumers.

Three Critical Questions

In covering the issue of “network neutrality,” most of the media failed to look beyond the simple knee-jerk responses and accusations to ask three critical questions:

1. Is this “prioritization of packets” even possible over the public Internet?

2. If it is possible, is it feasible?

3. If it is possible and feasible, is it being used today and are consumers being harmed?

Prioritization is possible today in three ways.  The first is through virtual private networks, which can be created by anyone and are in widespread use today.  These arguably violate the principle of “neutrality,” since individuals and companies can provide a “network within the Internet” to secure their transmissions.  VPNs are not widely used by e-commerce and web-based businesses, though that has always been a possibility.

The second is to take the priority packets off of the Internet and run them on a faster private network that does not have the inherent weaknesses of the public Internet – in which case the public Internet suffers no harm.  Moving packets onto a private network separate from the Internet is already done today, with no harm and no complaints from consumers.

In fact, many of the VoIP vendors that compete with traditional phone companies for local and long-distance telephony use private networks for their VoIP offerings, in order to give consumers the reliability that voice traffic requires and to ensure communications capability in an emergency.  Banning the use of such private networks would place consumers at risk as well as substantially degrade the telephone service they receive today from competitive providers.

The third is to alter the protocols of the Internet so that some packets are given priority and reliability – a process that would entail a major global effort to rewrite the standards of the Internet and gain consensus.  This approach would also require network operators to perform “deep packet inspection,” looking into each packet’s contents to decide which ones get priority.  That is not feasible in a country that values privacy and free speech.
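For a sense of what packet-level priority marking looks like in practice, the IPv4 header already carries a Type of Service byte (today’s DSCP field) that an application can set on a socket; whether the routers along the path honor that marking is exactly the standards-and-consensus problem described above.  A minimal Python sketch, assuming a Unix-like system; the DSCP value and addresses are illustrative:

    import socket

    # DSCP "Expedited Forwarding" (value 46) is conventionally used for
    # voice traffic; the ToS byte carries DSCP in its upper six bits.
    DSCP_EF = 46
    TOS_BYTE = DSCP_EF << 2

    # A UDP socket, as a VoIP application might use.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)

    # Every packet this socket sends now carries the priority marking,
    # but each router along the path decides independently whether to
    # honor it; nothing forces end-to-end priority on the public Internet.
    sock.sendto(b"voice frame", ("192.0.2.10", 5060))
    sock.close()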

The Bottom Line

Legislating “network neutrality,” no matter how well-intended, takes America back to the communications technologies of 1983.  It would roll back advances in broadband networks and technologies, and could permanently cripple America’s ability to compete in the global economy, as well as the ability of consumers to maintain their current choices in telecommunications services.

Legislating against any of the forms of network management currently in use – private networks for some services, or VPNs – harms consumers and businesses today.  Legislating against future technologies – even packet prioritization, if it can be done without threatening privacy and free speech – will stifle innovation and cripple America’s ability to deal with spam, viruses and predatory applications on the Internet.

At the same time, concerns about potential abuses of prioritization have already been addressed.  Begin with the simple assumption that in today’s competitive environment – in which most Americans can choose among DSL, cable Internet, satellite broadband, wireless broadband and cellular broadband, without even counting emerging services such as Broadband over Powerline – no sane company would dare to embrace any technology or service that would harm consumers.  The potential loss of customers would simply be too great to risk.

If a company were to act in such a manner, the Federal Communications Commission has already gone on record – and demonstrated – that it is willing and able to act decisively.

In the current environment, Congress has demonstrated a willingness to quash innovation and new services by threatening legislation against ideas that haven’t even been tested internally, much less put into play.

By permitting ideas to be explored and tested, and rejecting calls for legislation to prevent a problem that doesn’t exist, Congress will allow broadband Internet companies to seek and propose innovative ways to solve bandwidth problems in the present and future.  While some of these may prove untenable, and others unpopular, allowing ideas to be advanced is the bedrock of American innovation and the future of broadband services to the nation.